2025-05-28 18:25:54.280091 | Job console starting
2025-05-28 18:25:54.297486 | Updating git repos
2025-05-28 18:25:54.876284 | Cloning repos into workspace
2025-05-28 18:25:55.065699 | Restoring repo states
2025-05-28 18:25:55.091570 | Merging changes
2025-05-28 18:25:55.091600 | Checking out repos
2025-05-28 18:25:55.326435 | Preparing playbooks
2025-05-28 18:25:56.016138 | Running Ansible setup
2025-05-28 18:26:00.381543 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-28 18:26:01.160593 |
2025-05-28 18:26:01.160779 | PLAY [Base pre]
2025-05-28 18:26:01.179324 |
2025-05-28 18:26:01.179733 | TASK [Setup log path fact]
2025-05-28 18:26:01.233236 | orchestrator | ok
2025-05-28 18:26:01.265309 |
2025-05-28 18:26:01.265549 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-28 18:26:01.324810 | orchestrator | ok
2025-05-28 18:26:01.340339 |
2025-05-28 18:26:01.340459 | TASK [emit-job-header : Print job information]
2025-05-28 18:26:01.392536 | # Job Information
2025-05-28 18:26:01.392756 | Ansible Version: 2.16.14
2025-05-28 18:26:01.392798 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-28 18:26:01.392839 | Pipeline: post
2025-05-28 18:26:01.392869 | Executor: 521e9411259a
2025-05-28 18:26:01.392895 | Triggered by: https://github.com/osism/testbed/commit/670463e97d2d73852c0c65ef2fae5406bc488fe3
2025-05-28 18:26:01.392921 | Event ID: 64547318-3bed-11f0-9b88-c641d171509a
2025-05-28 18:26:01.400225 |
2025-05-28 18:26:01.400353 | LOOP [emit-job-header : Print node information]
2025-05-28 18:26:01.531461 | orchestrator | ok:
2025-05-28 18:26:01.531771 | orchestrator | # Node Information
2025-05-28 18:26:01.531830 | orchestrator | Inventory Hostname: orchestrator
2025-05-28 18:26:01.531874 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-28 18:26:01.531913 | orchestrator | Username: zuul-testbed06
2025-05-28 18:26:01.531950 | orchestrator | Distro: Debian 12.11
2025-05-28 18:26:01.531995 | orchestrator | Provider: static-testbed
2025-05-28 18:26:01.532083 | orchestrator | Region:
2025-05-28 18:26:01.532131 | orchestrator | Label: testbed-orchestrator
2025-05-28 18:26:01.532167 | orchestrator | Product Name: OpenStack Nova
2025-05-28 18:26:01.532202 | orchestrator | Interface IP: 81.163.193.140
2025-05-28 18:26:01.562188 |
2025-05-28 18:26:01.562374 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-28 18:26:02.115695 | orchestrator -> localhost | changed
2025-05-28 18:26:02.124551 |
2025-05-28 18:26:02.124695 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-28 18:26:03.197459 | orchestrator -> localhost | changed
2025-05-28 18:26:03.234429 |
2025-05-28 18:26:03.234597 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-28 18:26:03.512344 | orchestrator -> localhost | ok
2025-05-28 18:26:03.522008 |
2025-05-28 18:26:03.522334 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-28 18:26:03.556307 | orchestrator | ok
2025-05-28 18:26:03.573390 | orchestrator | included: /var/lib/zuul/builds/aad55cb4c7db435696486539db6e3f7a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-28 18:26:03.581552 |
2025-05-28 18:26:03.581657 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-28 18:26:04.528304 | orchestrator -> localhost | Generating public/private rsa key pair.
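The "Create Temp SSH key" task above is ordinary ssh-keygen output. As a rough sketch of the command that produces output like this (the flags, path layout, and comment are inferred from the log lines, not confirmed against the add-build-sshkey role's actual invocation):

```shell
# Generate an unencrypted per-build RSA key pair, as the log output suggests.
# The build UUID and the workspace-relative path mirror this job's log; the
# role's exact flags may differ.
workdir=$(mktemp -d)
build_uuid=aad55cb4c7db435696486539db6e3f7a
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey \
  -f "${workdir}/${build_uuid}_id_rsa"
# The pair now exists as ${build_uuid}_id_rsa and ${build_uuid}_id_rsa.pub,
# matching the "Your identification has been saved in ..." lines that follow.
```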
2025-05-28 18:26:04.528818 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/aad55cb4c7db435696486539db6e3f7a/work/aad55cb4c7db435696486539db6e3f7a_id_rsa
2025-05-28 18:26:04.528911 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/aad55cb4c7db435696486539db6e3f7a/work/aad55cb4c7db435696486539db6e3f7a_id_rsa.pub
2025-05-28 18:26:04.528972 | orchestrator -> localhost | The key fingerprint is:
2025-05-28 18:26:04.529027 | orchestrator -> localhost | SHA256:TiYj8hw9ZBjxvKqgh73R2foTivCjXVxqR8gYCgBTwSE zuul-build-sshkey
2025-05-28 18:26:04.529118 | orchestrator -> localhost | The key's randomart image is:
2025-05-28 18:26:04.529191 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-28 18:26:04.529243 | orchestrator -> localhost | |Eo+oo. |
2025-05-28 18:26:04.529292 | orchestrator -> localhost | |.o. = |
2025-05-28 18:26:04.529338 | orchestrator -> localhost | |. . . = |
2025-05-28 18:26:04.529384 | orchestrator -> localhost | |.. + = . |
2025-05-28 18:26:04.529430 | orchestrator -> localhost | |. o = O S |
2025-05-28 18:26:04.529488 | orchestrator -> localhost | |. * X.B |
2025-05-28 18:26:04.529534 | orchestrator -> localhost | |.=..@.o.. |
2025-05-28 18:26:04.529580 | orchestrator -> localhost | |oo*=.o. |
2025-05-28 18:26:04.529629 | orchestrator -> localhost | |oo+o.... |
2025-05-28 18:26:04.529676 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-28 18:26:04.529797 | orchestrator -> localhost | ok: Runtime: 0:00:00.411346
2025-05-28 18:26:04.547912 |
2025-05-28 18:26:04.548120 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-28 18:26:04.585542 | orchestrator | ok
2025-05-28 18:26:04.601250 | orchestrator | included: /var/lib/zuul/builds/aad55cb4c7db435696486539db6e3f7a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-28 18:26:04.615427 |
2025-05-28 18:26:04.615702 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-28 18:26:04.642712 | orchestrator | skipping: Conditional result was False
2025-05-28 18:26:04.654622 |
2025-05-28 18:26:04.654921 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-28 18:26:05.245979 | orchestrator | changed
2025-05-28 18:26:05.252515 |
2025-05-28 18:26:05.252625 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-28 18:26:05.517800 | orchestrator | ok
2025-05-28 18:26:05.524609 |
2025-05-28 18:26:05.524737 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-28 18:26:05.930731 | orchestrator | ok
2025-05-28 18:26:05.937171 |
2025-05-28 18:26:05.937311 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-28 18:26:06.390449 | orchestrator | ok
2025-05-28 18:26:06.398867 |
2025-05-28 18:26:06.398997 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-28 18:26:06.424117 | orchestrator | skipping: Conditional result was False
2025-05-28 18:26:06.437353 |
2025-05-28 18:26:06.437547 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-28 18:26:06.878712 | orchestrator -> localhost | changed
2025-05-28 18:26:06.893788 |
2025-05-28 18:26:06.893956 | TASK [add-build-sshkey : Add back temp key]
2025-05-28 18:26:07.249642 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/aad55cb4c7db435696486539db6e3f7a/work/aad55cb4c7db435696486539db6e3f7a_id_rsa (zuul-build-sshkey)
2025-05-28 18:26:07.249978 | orchestrator -> localhost | ok: Runtime: 0:00:00.018223
2025-05-28 18:26:07.268062 |
2025-05-28 18:26:07.268211 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-28 18:26:07.713266 | orchestrator | ok
2025-05-28 18:26:07.722281 |
2025-05-28 18:26:07.722422 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-28 18:26:07.757351 | orchestrator | skipping: Conditional result was False
2025-05-28 18:26:07.815614 |
2025-05-28 18:26:07.815762 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-28 18:26:08.219437 | orchestrator | ok
2025-05-28 18:26:08.236156 |
2025-05-28 18:26:08.236320 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-28 18:26:08.283341 | orchestrator | ok
2025-05-28 18:26:08.298291 |
2025-05-28 18:26:08.298795 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-28 18:26:08.608784 | orchestrator -> localhost | ok
2025-05-28 18:26:08.627691 |
2025-05-28 18:26:08.627850 | TASK [validate-host : Collect information about the host]
2025-05-28 18:26:09.962743 | orchestrator | ok
2025-05-28 18:26:09.982110 |
2025-05-28 18:26:09.982249 | TASK [validate-host : Sanitize hostname]
2025-05-28 18:26:10.045609 | orchestrator | ok
2025-05-28 18:26:10.057964 |
2025-05-28 18:26:10.058309 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-28 18:26:10.673313 | orchestrator -> localhost | changed
2025-05-28 18:26:10.682544 |
2025-05-28 18:26:10.682682 | TASK [validate-host : Collect information about zuul worker]
2025-05-28 18:26:11.122168 | orchestrator | ok
2025-05-28 18:26:11.129667 |
2025-05-28 18:26:11.129803 | TASK [validate-host : Write out all zuul information for each host]
2025-05-28 18:26:11.722737 | orchestrator -> localhost | changed
2025-05-28 18:26:11.745007 |
2025-05-28 18:26:11.745363 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-28 18:26:12.036965 | orchestrator | ok
2025-05-28 18:26:12.046481 |
2025-05-28 18:26:12.046627 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-28 18:26:29.215722 | orchestrator | changed:
2025-05-28 18:26:29.216164 | orchestrator | .d..t...... src/
2025-05-28 18:26:29.216242 | orchestrator | .d..t...... src/github.com/
2025-05-28 18:26:29.216298 | orchestrator | .d..t...... src/github.com/osism/
2025-05-28 18:26:29.216349 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-28 18:26:29.216395 | orchestrator | RedHat.yml
2025-05-28 18:26:29.236203 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-28 18:26:29.236228 | orchestrator | RedHat.yml
2025-05-28 18:26:29.236305 | orchestrator | = 1.53.0"...
2025-05-28 18:26:44.313657 | orchestrator | 18:26:44.313 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-28 18:26:45.674416 | orchestrator | 18:26:45.674 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-05-28 18:26:46.859542 | orchestrator | 18:26:46.859 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-05-28 18:26:48.328683 | orchestrator | 18:26:48.328 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-28 18:26:49.346641 | orchestrator | 18:26:49.346 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-28 18:26:50.659848 | orchestrator | 18:26:50.659 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-28 18:26:51.748272 | orchestrator | 18:26:51.747 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-28 18:26:51.748374 | orchestrator | 18:26:51.748 STDOUT terraform: Providers are signed by their developers.
2025-05-28 18:26:51.748503 | orchestrator | 18:26:51.748 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-28 18:26:51.748645 | orchestrator | 18:26:51.748 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-28 18:26:51.748818 | orchestrator | 18:26:51.748 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-28 18:26:51.749590 | orchestrator | 18:26:51.748 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-28 18:26:51.749802 | orchestrator | 18:26:51.749 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-28 18:26:51.749826 | orchestrator | 18:26:51.749 STDOUT terraform: you run "tofu init" in the future.
2025-05-28 18:26:51.749846 | orchestrator | 18:26:51.749 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-28 18:26:51.749987 | orchestrator | 18:26:51.749 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-28 18:26:51.750285 | orchestrator | 18:26:51.749 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-28 18:26:51.750354 | orchestrator | 18:26:51.750 STDOUT terraform: should now work.
2025-05-28 18:26:51.750562 | orchestrator | 18:26:51.750 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-28 18:26:51.750756 | orchestrator | 18:26:51.750 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-28 18:26:51.750945 | orchestrator | 18:26:51.750 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-28 18:26:51.935897 | orchestrator | 18:26:51.935 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-05-28 18:26:52.158354 | orchestrator | 18:26:52.158 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-28 18:26:52.158478 | orchestrator | 18:26:52.158 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-28 18:26:52.158608 | orchestrator | 18:26:52.158 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-28 18:26:52.158661 | orchestrator | 18:26:52.158 STDOUT terraform: for this configuration.
2025-05-28 18:26:52.410148 | orchestrator | 18:26:52.409 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-05-28 18:26:52.522234 | orchestrator | 18:26:52.521 STDOUT terraform: ci.auto.tfvars
2025-05-28 18:26:52.533521 | orchestrator | 18:26:52.533 STDOUT terraform: default_custom.tf
2025-05-28 18:26:52.770424 | orchestrator | 18:26:52.770 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-05-28 18:26:53.801058 | orchestrator | 18:26:53.800 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
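What follows is an OpenTofu execution plan. Its action markers ("will be created", "will be read during apply") make the plan easy to summarize mechanically. As an illustrative post-processing aid for logs like this one (not part of the job itself; the sample lines are trimmed from the output below):

```python
import re

# Match OpenTofu plan summary comments, e.g.
#   # local_file.inventory will be created
#   # data.openstack_images_image_v2.image will be read during apply
CREATE = re.compile(r"#\s+\S+ will be created")
READ = re.compile(r"#\s+\S+ will be read during apply")

def tally(plan_text: str) -> dict:
    """Count resources to be created and data sources read during apply."""
    return {
        "create": len(CREATE.findall(plan_text)),
        "read": len(READ.findall(plan_text)),
    }

sample = """\
# data.openstack_images_image_v2.image will be read during apply
# local_file.MANAGER_ADDRESS will be created
# openstack_blockstorage_volume_v3.node_base_volume[0] will be created
"""
print(tally(sample))  # {'create': 2, 'read': 1}
```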
2025-05-28 18:26:54.330195 | orchestrator | 18:26:54.329 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-28 18:26:54.574774 | orchestrator | 18:26:54.574 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-28 18:26:54.574893 | orchestrator | 18:26:54.574 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-28 18:26:54.574906 | orchestrator | 18:26:54.574 STDOUT terraform:   + create
2025-05-28 18:26:54.574915 | orchestrator | 18:26:54.574 STDOUT terraform:  <= read (data resources)
2025-05-28 18:26:54.574982 | orchestrator | 18:26:54.574 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-28 18:26:54.575150 | orchestrator | 18:26:54.575 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-28 18:26:54.575232 | orchestrator | 18:26:54.575 STDOUT terraform:   # (config refers to values not yet known)
2025-05-28 18:26:54.575320 | orchestrator | 18:26:54.575 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-28 18:26:54.575476 | orchestrator | 18:26:54.575 STDOUT terraform:       + checksum = (known after apply)
2025-05-28 18:26:54.575524 | orchestrator | 18:26:54.575 STDOUT terraform:       + created_at = (known after apply)
2025-05-28 18:26:54.575611 | orchestrator | 18:26:54.575 STDOUT terraform:       + file = (known after apply)
2025-05-28 18:26:54.575708 | orchestrator | 18:26:54.575 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.575794 | orchestrator | 18:26:54.575 STDOUT terraform:       + metadata = (known after apply)
2025-05-28 18:26:54.575877 | orchestrator | 18:26:54.575 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-05-28 18:26:54.575961 | orchestrator | 18:26:54.575 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-05-28 18:26:54.576021 | orchestrator | 18:26:54.575 STDOUT terraform:       + most_recent = true
2025-05-28 18:26:54.576102 | orchestrator | 18:26:54.576 STDOUT terraform:       + name = (known after apply)
2025-05-28 18:26:54.576189 | orchestrator | 18:26:54.576 STDOUT terraform:       + protected = (known after apply)
2025-05-28 18:26:54.576254 | orchestrator | 18:26:54.576 STDOUT terraform:       + region = (known after apply)
2025-05-28 18:26:54.576333 | orchestrator | 18:26:54.576 STDOUT terraform:       + schema = (known after apply)
2025-05-28 18:26:54.576437 | orchestrator | 18:26:54.576 STDOUT terraform:       + size_bytes = (known after apply)
2025-05-28 18:26:54.576520 | orchestrator | 18:26:54.576 STDOUT terraform:       + tags = (known after apply)
2025-05-28 18:26:54.576602 | orchestrator | 18:26:54.576 STDOUT terraform:       + updated_at = (known after apply)
2025-05-28 18:26:54.576659 | orchestrator | 18:26:54.576 STDOUT terraform:     }
2025-05-28 18:26:54.576774 | orchestrator | 18:26:54.576 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-28 18:26:54.576857 | orchestrator | 18:26:54.576 STDOUT terraform:   # (config refers to values not yet known)
2025-05-28 18:26:54.576960 | orchestrator | 18:26:54.576 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-28 18:26:54.577041 | orchestrator | 18:26:54.576 STDOUT terraform:       + checksum = (known after apply)
2025-05-28 18:26:54.577120 | orchestrator | 18:26:54.577 STDOUT terraform:       + created_at = (known after apply)
2025-05-28 18:26:54.577204 | orchestrator | 18:26:54.577 STDOUT terraform:       + file = (known after apply)
2025-05-28 18:26:54.577285 | orchestrator | 18:26:54.577 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.577367 | orchestrator | 18:26:54.577 STDOUT terraform:       + metadata = (known after apply)
2025-05-28 18:26:54.577504 | orchestrator | 18:26:54.577 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-05-28 18:26:54.577575 | orchestrator | 18:26:54.577 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-05-28 18:26:54.577635 | orchestrator | 18:26:54.577 STDOUT terraform:       + most_recent = true
2025-05-28 18:26:54.577723 | orchestrator | 18:26:54.577 STDOUT terraform:       + name = (known after apply)
2025-05-28 18:26:54.577804 | orchestrator | 18:26:54.577 STDOUT terraform:       + protected = (known after apply)
2025-05-28 18:26:54.577886 | orchestrator | 18:26:54.577 STDOUT terraform:       + region = (known after apply)
2025-05-28 18:26:54.577971 | orchestrator | 18:26:54.577 STDOUT terraform:       + schema = (known after apply)
2025-05-28 18:26:54.578282 | orchestrator | 18:26:54.577 STDOUT terraform:       + size_bytes = (known after apply)
2025-05-28 18:26:54.578306 | orchestrator | 18:26:54.578 STDOUT terraform:       + tags = (known after apply)
2025-05-28 18:26:54.578452 | orchestrator | 18:26:54.578 STDOUT terraform:       + updated_at = (known after apply)
2025-05-28 18:26:54.578464 | orchestrator | 18:26:54.578 STDOUT terraform:     }
2025-05-28 18:26:54.578518 | orchestrator | 18:26:54.578 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-28 18:26:54.578628 | orchestrator | 18:26:54.578 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-28 18:26:54.578747 | orchestrator | 18:26:54.578 STDOUT terraform:       + content = (known after apply)
2025-05-28 18:26:54.578869 | orchestrator | 18:26:54.578 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-28 18:26:54.578954 | orchestrator | 18:26:54.578 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-28 18:26:54.579037 | orchestrator | 18:26:54.578 STDOUT terraform:       + content_md5 = (known after apply)
2025-05-28 18:26:54.579137 | orchestrator | 18:26:54.579 STDOUT terraform:       + content_sha1 = (known after apply)
2025-05-28 18:26:54.579250 | orchestrator | 18:26:54.579 STDOUT terraform:       + content_sha256 = (known after apply)
2025-05-28 18:26:54.579353 | orchestrator | 18:26:54.579 STDOUT terraform:       + content_sha512 = (known after apply)
2025-05-28 18:26:54.579425 | orchestrator | 18:26:54.579 STDOUT terraform:       + directory_permission = "0777"
2025-05-28 18:26:54.579498 | orchestrator | 18:26:54.579 STDOUT terraform:       + file_permission = "0644"
2025-05-28 18:26:54.579599 | orchestrator | 18:26:54.579 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-05-28 18:26:54.579712 | orchestrator | 18:26:54.579 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.579726 | orchestrator | 18:26:54.579 STDOUT terraform:     }
2025-05-28 18:26:54.579812 | orchestrator | 18:26:54.579 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-28 18:26:54.579885 | orchestrator | 18:26:54.579 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-28 18:26:54.579985 | orchestrator | 18:26:54.579 STDOUT terraform:       + content = (known after apply)
2025-05-28 18:26:54.580081 | orchestrator | 18:26:54.579 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-28 18:26:54.580179 | orchestrator | 18:26:54.580 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-28 18:26:54.580277 | orchestrator | 18:26:54.580 STDOUT terraform:       + content_md5 = (known after apply)
2025-05-28 18:26:54.580428 | orchestrator | 18:26:54.580 STDOUT terraform:       + content_sha1 = (known after apply)
2025-05-28 18:26:54.580526 | orchestrator | 18:26:54.580 STDOUT terraform:       + content_sha256 = (known after apply)
2025-05-28 18:26:54.580623 | orchestrator | 18:26:54.580 STDOUT terraform:       + content_sha512 = (known after apply)
2025-05-28 18:26:54.580685 | orchestrator | 18:26:54.580 STDOUT terraform:       + directory_permission = "0777"
2025-05-28 18:26:54.580745 | orchestrator | 18:26:54.580 STDOUT terraform:       + file_permission = "0644"
2025-05-28 18:26:54.580824 | orchestrator | 18:26:54.580 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-05-28 18:26:54.580915 | orchestrator | 18:26:54.580 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.580950 | orchestrator | 18:26:54.580 STDOUT terraform:     }
2025-05-28 18:26:54.581014 | orchestrator | 18:26:54.580 STDOUT terraform:   # local_file.inventory will be created
2025-05-28 18:26:54.581080 | orchestrator | 18:26:54.581 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-28 18:26:54.581170 | orchestrator | 18:26:54.581 STDOUT terraform:       + content = (known after apply)
2025-05-28 18:26:54.581253 | orchestrator | 18:26:54.581 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-28 18:26:54.581402 | orchestrator | 18:26:54.581 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-28 18:26:54.581490 | orchestrator | 18:26:54.581 STDOUT terraform:       + content_md5 = (known after apply)
2025-05-28 18:26:54.581586 | orchestrator | 18:26:54.581 STDOUT terraform:       + content_sha1 = (known after apply)
2025-05-28 18:26:54.581665 | orchestrator | 18:26:54.581 STDOUT terraform:       + content_sha256 = (known after apply)
2025-05-28 18:26:54.581754 | orchestrator | 18:26:54.581 STDOUT terraform:       + content_sha512 = (known after apply)
2025-05-28 18:26:54.581814 | orchestrator | 18:26:54.581 STDOUT terraform:       + directory_permission = "0777"
2025-05-28 18:26:54.581875 | orchestrator | 18:26:54.581 STDOUT terraform:       + file_permission = "0644"
2025-05-28 18:26:54.581951 | orchestrator | 18:26:54.581 STDOUT terraform:       + filename = "inventory.ci"
2025-05-28 18:26:54.582066 | orchestrator | 18:26:54.581 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.582099 | orchestrator | 18:26:54.582 STDOUT terraform:     }
2025-05-28 18:26:54.582175 | orchestrator | 18:26:54.582 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-28 18:26:54.582249 | orchestrator | 18:26:54.582 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-28 18:26:54.582328 | orchestrator | 18:26:54.582 STDOUT terraform:       + content = (sensitive value)
2025-05-28 18:26:54.582427 | orchestrator | 18:26:54.582 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-28 18:26:54.582520 | orchestrator | 18:26:54.582 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-28 18:26:54.582606 | orchestrator | 18:26:54.582 STDOUT terraform:       + content_md5 = (known after apply)
2025-05-28 18:26:54.582693 | orchestrator | 18:26:54.582 STDOUT terraform:       + content_sha1 = (known after apply)
2025-05-28 18:26:54.582781 | orchestrator | 18:26:54.582 STDOUT terraform:       + content_sha256 = (known after apply)
2025-05-28 18:26:54.582937 | orchestrator | 18:26:54.582 STDOUT terraform:       + content_sha512 = (known after apply)
2025-05-28 18:26:54.583156 | orchestrator | 18:26:54.582 STDOUT terraform:       + directory_permission = "0700"
2025-05-28 18:26:54.583169 | orchestrator | 18:26:54.582 STDOUT terraform:       + file_permission = "0600"
2025-05-28 18:26:54.583176 | orchestrator | 18:26:54.583 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-05-28 18:26:54.583210 | orchestrator | 18:26:54.583 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.583244 | orchestrator | 18:26:54.583 STDOUT terraform:     }
2025-05-28 18:26:54.583317 | orchestrator | 18:26:54.583 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-28 18:26:54.583440 | orchestrator | 18:26:54.583 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-28 18:26:54.583483 | orchestrator | 18:26:54.583 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.583515 | orchestrator | 18:26:54.583 STDOUT terraform:     }
2025-05-28 18:26:54.583637 | orchestrator | 18:26:54.583 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-28 18:26:54.583741 | orchestrator | 18:26:54.583 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-28 18:26:54.583815 | orchestrator | 18:26:54.583 STDOUT terraform:       + attachment = (known after apply)
2025-05-28 18:26:54.583866 | orchestrator | 18:26:54.583 STDOUT terraform:       + availability_zone = "nova"
2025-05-28 18:26:54.583942 | orchestrator | 18:26:54.583 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.584015 | orchestrator | 18:26:54.583 STDOUT terraform:       + image_id = (known after apply)
2025-05-28 18:26:54.584089 | orchestrator | 18:26:54.584 STDOUT terraform:       + metadata = (known after apply)
2025-05-28 18:26:54.584194 | orchestrator | 18:26:54.584 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-05-28 18:26:54.584275 | orchestrator | 18:26:54.584 STDOUT terraform:       + region = (known after apply)
2025-05-28 18:26:54.584313 | orchestrator | 18:26:54.584 STDOUT terraform:       + size = 80
2025-05-28 18:26:54.584431 | orchestrator | 18:26:54.584 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-28 18:26:54.584482 | orchestrator | 18:26:54.584 STDOUT terraform:       + volume_type = "ssd"
2025-05-28 18:26:54.584511 | orchestrator | 18:26:54.584 STDOUT terraform:     }
2025-05-28 18:26:54.584610 | orchestrator | 18:26:54.584 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-28 18:26:54.584702 | orchestrator | 18:26:54.584 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 18:26:54.584776 | orchestrator | 18:26:54.584 STDOUT terraform:       + attachment = (known after apply)
2025-05-28 18:26:54.584826 | orchestrator | 18:26:54.584 STDOUT terraform:       + availability_zone = "nova"
2025-05-28 18:26:54.584904 | orchestrator | 18:26:54.584 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.584976 | orchestrator | 18:26:54.584 STDOUT terraform:       + image_id = (known after apply)
2025-05-28 18:26:54.585111 | orchestrator | 18:26:54.584 STDOUT terraform:       + metadata = (known after apply)
2025-05-28 18:26:54.585181 | orchestrator | 18:26:54.585 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-05-28 18:26:54.585256 | orchestrator | 18:26:54.585 STDOUT terraform:       + region = (known after apply)
2025-05-28 18:26:54.585304 | orchestrator | 18:26:54.585 STDOUT terraform:       + size = 80
2025-05-28 18:26:54.585377 | orchestrator | 18:26:54.585 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-28 18:26:54.585445 | orchestrator | 18:26:54.585 STDOUT terraform:       + volume_type = "ssd"
2025-05-28 18:26:54.585455 | orchestrator | 18:26:54.585 STDOUT terraform:     }
2025-05-28 18:26:54.585560 | orchestrator | 18:26:54.585 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-28 18:26:54.585655 | orchestrator | 18:26:54.585 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 18:26:54.585735 | orchestrator | 18:26:54.585 STDOUT terraform:       + attachment = (known after apply)
2025-05-28 18:26:54.585782 | orchestrator | 18:26:54.585 STDOUT terraform:       + availability_zone = "nova"
2025-05-28 18:26:54.585858 | orchestrator | 18:26:54.585 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.585932 | orchestrator | 18:26:54.585 STDOUT terraform:       + image_id = (known after apply)
2025-05-28 18:26:54.586029 | orchestrator | 18:26:54.585 STDOUT terraform:       + metadata = (known after apply)
2025-05-28 18:26:54.586135 | orchestrator | 18:26:54.585 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-05-28 18:26:54.586196 | orchestrator | 18:26:54.586 STDOUT terraform:       + region = (known after apply)
2025-05-28 18:26:54.586238 | orchestrator | 18:26:54.586 STDOUT terraform:       + size = 80
2025-05-28 18:26:54.586288 | orchestrator | 18:26:54.586 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-28 18:26:54.586339 | orchestrator | 18:26:54.586 STDOUT terraform:       + volume_type = "ssd"
2025-05-28 18:26:54.586373 | orchestrator | 18:26:54.586 STDOUT terraform:     }
2025-05-28 18:26:54.586552 | orchestrator | 18:26:54.586 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-28 18:26:54.586633 | orchestrator | 18:26:54.586 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 18:26:54.586710 | orchestrator | 18:26:54.586 STDOUT terraform:       + attachment = (known after apply)
2025-05-28 18:26:54.586762 | orchestrator | 18:26:54.586 STDOUT terraform:       + availability_zone = "nova"
2025-05-28 18:26:54.586836 | orchestrator | 18:26:54.586 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.586910 | orchestrator | 18:26:54.586 STDOUT terraform:       + image_id = (known after apply)
2025-05-28 18:26:54.586981 | orchestrator | 18:26:54.586 STDOUT terraform:       + metadata = (known after apply)
2025-05-28 18:26:54.587061 | orchestrator | 18:26:54.586 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-05-28 18:26:54.587127 | orchestrator | 18:26:54.587 STDOUT terraform:       + region = (known after apply)
2025-05-28 18:26:54.587166 | orchestrator | 18:26:54.587 STDOUT terraform:       + size = 80
2025-05-28 18:26:54.587206 | orchestrator | 18:26:54.587 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-28 18:26:54.587252 | orchestrator | 18:26:54.587 STDOUT terraform:       + volume_type = "ssd"
2025-05-28 18:26:54.587260 | orchestrator | 18:26:54.587 STDOUT terraform:     }
2025-05-28 18:26:54.587352 | orchestrator | 18:26:54.587 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-28 18:26:54.587445 | orchestrator | 18:26:54.587 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 18:26:54.587508 | orchestrator | 18:26:54.587 STDOUT terraform:       + attachment = (known after apply)
2025-05-28 18:26:54.587560 | orchestrator | 18:26:54.587 STDOUT terraform:       + availability_zone = "nova"
2025-05-28 18:26:54.587613 | orchestrator | 18:26:54.587 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.587677 | orchestrator | 18:26:54.587 STDOUT terraform:       + image_id = (known after apply)
2025-05-28 18:26:54.587749 | orchestrator | 18:26:54.587 STDOUT terraform:       + metadata = (known after apply)
2025-05-28 18:26:54.587822 | orchestrator | 18:26:54.587 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-05-28 18:26:54.587886 | orchestrator | 18:26:54.587 STDOUT terraform:       + region = (known after apply)
2025-05-28 18:26:54.587930 | orchestrator | 18:26:54.587 STDOUT terraform:       + size = 80
2025-05-28 18:26:54.587963 | orchestrator | 18:26:54.587 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-28 18:26:54.588003 | orchestrator | 18:26:54.587 STDOUT terraform:       + volume_type = "ssd"
2025-05-28 18:26:54.588010 | orchestrator | 18:26:54.587 STDOUT terraform:     }
2025-05-28 18:26:54.588099 | orchestrator | 18:26:54.587 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-28 18:26:54.588165 | orchestrator | 18:26:54.588 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 18:26:54.588228 | orchestrator | 18:26:54.588 STDOUT terraform:       + attachment = (known after apply)
2025-05-28 18:26:54.588264 | orchestrator | 18:26:54.588 STDOUT terraform:       + availability_zone = "nova"
2025-05-28 18:26:54.588326 | orchestrator | 18:26:54.588 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.588406 | orchestrator | 18:26:54.588 STDOUT terraform:       + image_id = (known after apply)
2025-05-28 18:26:54.588468 | orchestrator | 18:26:54.588 STDOUT terraform:       + metadata = (known after apply)
2025-05-28 18:26:54.588541 | orchestrator | 18:26:54.588 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-05-28 18:26:54.588602 | orchestrator | 18:26:54.588 STDOUT terraform:       + region = (known after apply)
2025-05-28 18:26:54.588642 | orchestrator | 18:26:54.588 STDOUT terraform:       + size = 80
2025-05-28 18:26:54.588690 | orchestrator | 18:26:54.588 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-28 18:26:54.588738 | orchestrator | 18:26:54.588 STDOUT terraform:       + volume_type = "ssd"
2025-05-28 18:26:54.588745 | orchestrator | 18:26:54.588 STDOUT terraform:     }
2025-05-28 18:26:54.588828 | orchestrator | 18:26:54.588 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-28 18:26:54.588915 | orchestrator | 18:26:54.588 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-28 18:26:54.588976 | orchestrator | 18:26:54.588 STDOUT terraform:       + attachment = (known after apply)
2025-05-28 18:26:54.589017 | orchestrator | 18:26:54.588 STDOUT terraform:       + availability_zone = "nova"
2025-05-28 18:26:54.589083 | orchestrator | 18:26:54.589 STDOUT terraform:       + id = (known after apply)
2025-05-28 18:26:54.589145 | orchestrator | 18:26:54.589 STDOUT terraform:       + image_id = (known after apply)
2025-05-28 18:26:54.589202 | orchestrator | 18:26:54.589 STDOUT terraform:       + metadata = (known after apply)
2025-05-28 18:26:54.589280 | orchestrator | 18:26:54.589 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-05-28 18:26:54.589357 | orchestrator | 18:26:54.589 STDOUT terraform:       + region = (known after apply)
2025-05-28 18:26:54.589414 | orchestrator | 18:26:54.589 STDOUT terraform:       + size = 80
2025-05-28 18:26:54.589457 | orchestrator | 18:26:54.589 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-28 18:26:54.589507 | orchestrator | 18:26:54.589 STDOUT terraform:       + volume_type = "ssd"
2025-05-28 18:26:54.589515 | orchestrator | 18:26:54.589 STDOUT terraform:     }
2025-05-28 18:26:54.589602 | orchestrator | 18:26:54.589 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-28 18:26:54.589677 | orchestrator | 18:26:54.589 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-28 18:26:54.589733 | orchestrator | 18:26:54.589 STDOUT terraform:       + attachment = (known after apply)
2025-05-28 18:26:54.589765 | orchestrator | 18:26:54.589 STDOUT terraform:       +
availability_zone = "nova" 2025-05-28 18:26:54.589824 | orchestrator | 18:26:54.589 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.589893 | orchestrator | 18:26:54.589 STDOUT terraform:  + metadata = (known after apply) 2025-05-28 18:26:54.589950 | orchestrator | 18:26:54.589 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-28 18:26:54.590004 | orchestrator | 18:26:54.589 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.590064 | orchestrator | 18:26:54.589 STDOUT terraform:  + size = 20 2025-05-28 18:26:54.590108 | orchestrator | 18:26:54.590 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-28 18:26:54.590155 | orchestrator | 18:26:54.590 STDOUT terraform:  + volume_type = "ssd" 2025-05-28 18:26:54.590163 | orchestrator | 18:26:54.590 STDOUT terraform:  } 2025-05-28 18:26:54.590240 | orchestrator | 18:26:54.590 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-28 18:26:54.590322 | orchestrator | 18:26:54.590 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-28 18:26:54.590392 | orchestrator | 18:26:54.590 STDOUT terraform:  + attachment = (known after apply) 2025-05-28 18:26:54.590439 | orchestrator | 18:26:54.590 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.590491 | orchestrator | 18:26:54.590 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.590551 | orchestrator | 18:26:54.590 STDOUT terraform:  + metadata = (known after apply) 2025-05-28 18:26:54.590617 | orchestrator | 18:26:54.590 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-28 18:26:54.590679 | orchestrator | 18:26:54.590 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.590714 | orchestrator | 18:26:54.590 STDOUT terraform:  + size = 20 2025-05-28 18:26:54.590755 | orchestrator | 18:26:54.590 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-28 18:26:54.590799 | orchestrator | 
18:26:54.590 STDOUT terraform:  + volume_type = "ssd" 2025-05-28 18:26:54.590807 | orchestrator | 18:26:54.590 STDOUT terraform:  } 2025-05-28 18:26:54.590886 | orchestrator | 18:26:54.590 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-28 18:26:54.590959 | orchestrator | 18:26:54.590 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-28 18:26:54.591020 | orchestrator | 18:26:54.590 STDOUT terraform:  + attachment = (known after apply) 2025-05-28 18:26:54.591062 | orchestrator | 18:26:54.591 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.591122 | orchestrator | 18:26:54.591 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.591182 | orchestrator | 18:26:54.591 STDOUT terraform:  + metadata = (known after apply) 2025-05-28 18:26:54.591246 | orchestrator | 18:26:54.591 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-28 18:26:54.591305 | orchestrator | 18:26:54.591 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.591341 | orchestrator | 18:26:54.591 STDOUT terraform:  + size = 20 2025-05-28 18:26:54.591395 | orchestrator | 18:26:54.591 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-28 18:26:54.591497 | orchestrator | 18:26:54.591 STDOUT terraform:  + volume_type = "ssd" 2025-05-28 18:26:54.591505 | orchestrator | 18:26:54.591 STDOUT terraform:  } 2025-05-28 18:26:54.591588 | orchestrator | 18:26:54.591 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-28 18:26:54.591661 | orchestrator | 18:26:54.591 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-28 18:26:54.591720 | orchestrator | 18:26:54.591 STDOUT terraform:  + attachment = (known after apply) 2025-05-28 18:26:54.591763 | orchestrator | 18:26:54.591 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.591818 | orchestrator | 18:26:54.591 STDOUT 
terraform:  + id = (known after apply) 2025-05-28 18:26:54.591873 | orchestrator | 18:26:54.591 STDOUT terraform:  + metadata = (known after apply) 2025-05-28 18:26:54.591931 | orchestrator | 18:26:54.591 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-28 18:26:54.591985 | orchestrator | 18:26:54.591 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.592017 | orchestrator | 18:26:54.591 STDOUT terraform:  + size = 20 2025-05-28 18:26:54.592057 | orchestrator | 18:26:54.592 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-28 18:26:54.592090 | orchestrator | 18:26:54.592 STDOUT terraform:  + volume_type = "ssd" 2025-05-28 18:26:54.592099 | orchestrator | 18:26:54.592 STDOUT terraform:  } 2025-05-28 18:26:54.592169 | orchestrator | 18:26:54.592 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-28 18:26:54.592233 | orchestrator | 18:26:54.592 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-28 18:26:54.592283 | orchestrator | 18:26:54.592 STDOUT terraform:  + attachment = (known after apply) 2025-05-28 18:26:54.592321 | orchestrator | 18:26:54.592 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.592533 | orchestrator | 18:26:54.592 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.592603 | orchestrator | 18:26:54.592 STDOUT terraform:  + metadata = (known after apply) 2025-05-28 18:26:54.592620 | orchestrator | 18:26:54.592 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-28 18:26:54.592628 | orchestrator | 18:26:54.592 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.592637 | orchestrator | 18:26:54.592 STDOUT terraform:  + size = 20 2025-05-28 18:26:54.592658 | orchestrator | 18:26:54.592 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-28 18:26:54.592704 | orchestrator | 18:26:54.592 STDOUT terraform:  + volume_type = "ssd" 2025-05-28 18:26:54.592738 | 
orchestrator | 18:26:54.592 STDOUT terraform:  } 2025-05-28 18:26:54.592785 | orchestrator | 18:26:54.592 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-28 18:26:54.592847 | orchestrator | 18:26:54.592 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-28 18:26:54.592900 | orchestrator | 18:26:54.592 STDOUT terraform:  + attachment = (known after apply) 2025-05-28 18:26:54.592930 | orchestrator | 18:26:54.592 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.592986 | orchestrator | 18:26:54.592 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.593045 | orchestrator | 18:26:54.592 STDOUT terraform:  + metadata = (known after apply) 2025-05-28 18:26:54.593100 | orchestrator | 18:26:54.593 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-28 18:26:54.593153 | orchestrator | 18:26:54.593 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.593185 | orchestrator | 18:26:54.593 STDOUT terraform:  + size = 20 2025-05-28 18:26:54.593230 | orchestrator | 18:26:54.593 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-28 18:26:54.593285 | orchestrator | 18:26:54.593 STDOUT terraform:  + volume_type = "ssd" 2025-05-28 18:26:54.593296 | orchestrator | 18:26:54.593 STDOUT terraform:  } 2025-05-28 18:26:54.593367 | orchestrator | 18:26:54.593 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-28 18:26:54.593478 | orchestrator | 18:26:54.593 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-28 18:26:54.593532 | orchestrator | 18:26:54.593 STDOUT terraform:  + attachment = (known after apply) 2025-05-28 18:26:54.593550 | orchestrator | 18:26:54.593 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.593611 | orchestrator | 18:26:54.593 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.593663 | orchestrator | 
18:26:54.593 STDOUT terraform:  + metadata = (known after apply) 2025-05-28 18:26:54.593719 | orchestrator | 18:26:54.593 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-28 18:26:54.593766 | orchestrator | 18:26:54.593 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.593776 | orchestrator | 18:26:54.593 STDOUT terraform:  + size = 20 2025-05-28 18:26:54.593834 | orchestrator | 18:26:54.593 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-28 18:26:54.593847 | orchestrator | 18:26:54.593 STDOUT terraform:  + volume_type = "ssd" 2025-05-28 18:26:54.593856 | orchestrator | 18:26:54.593 STDOUT terraform:  } 2025-05-28 18:26:54.593925 | orchestrator | 18:26:54.593 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-28 18:26:54.593982 | orchestrator | 18:26:54.593 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-28 18:26:54.594034 | orchestrator | 18:26:54.593 STDOUT terraform:  + attachment = (known after apply) 2025-05-28 18:26:54.594082 | orchestrator | 18:26:54.594 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.594129 | orchestrator | 18:26:54.594 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.594175 | orchestrator | 18:26:54.594 STDOUT terraform:  + metadata = (known after apply) 2025-05-28 18:26:54.594227 | orchestrator | 18:26:54.594 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-28 18:26:54.594272 | orchestrator | 18:26:54.594 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.594283 | orchestrator | 18:26:54.594 STDOUT terraform:  + size = 20 2025-05-28 18:26:54.594328 | orchestrator | 18:26:54.594 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-28 18:26:54.594341 | orchestrator | 18:26:54.594 STDOUT terraform:  + volume_type = "ssd" 2025-05-28 18:26:54.594370 | orchestrator | 18:26:54.594 STDOUT terraform:  } 2025-05-28 18:26:54.594532 | orchestrator | 
18:26:54.594 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-28 18:26:54.594567 | orchestrator | 18:26:54.594 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-28 18:26:54.594577 | orchestrator | 18:26:54.594 STDOUT terraform:  + attachment = (known after apply) 2025-05-28 18:26:54.594582 | orchestrator | 18:26:54.594 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.594607 | orchestrator | 18:26:54.594 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.594656 | orchestrator | 18:26:54.594 STDOUT terraform:  + metadata = (known after apply) 2025-05-28 18:26:54.594706 | orchestrator | 18:26:54.594 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-28 18:26:54.594758 | orchestrator | 18:26:54.594 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.594780 | orchestrator | 18:26:54.594 STDOUT terraform:  + size = 20 2025-05-28 18:26:54.594816 | orchestrator | 18:26:54.594 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-28 18:26:54.594849 | orchestrator | 18:26:54.594 STDOUT terraform:  + volume_type = "ssd" 2025-05-28 18:26:54.594858 | orchestrator | 18:26:54.594 STDOUT terraform:  } 2025-05-28 18:26:54.595079 | orchestrator | 18:26:54.594 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-28 18:26:54.595116 | orchestrator | 18:26:54.595 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-28 18:26:54.595162 | orchestrator | 18:26:54.595 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-28 18:26:54.595207 | orchestrator | 18:26:54.595 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-28 18:26:54.595252 | orchestrator | 18:26:54.595 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-28 18:26:54.595298 | orchestrator | 18:26:54.595 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 
18:26:54.595330 | orchestrator | 18:26:54.595 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.595361 | orchestrator | 18:26:54.595 STDOUT terraform:  + config_drive = true 2025-05-28 18:26:54.595432 | orchestrator | 18:26:54.595 STDOUT terraform:  + created = (known after apply) 2025-05-28 18:26:54.595488 | orchestrator | 18:26:54.595 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-28 18:26:54.595530 | orchestrator | 18:26:54.595 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-28 18:26:54.595563 | orchestrator | 18:26:54.595 STDOUT terraform:  + force_delete = false 2025-05-28 18:26:54.595609 | orchestrator | 18:26:54.595 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-28 18:26:54.595658 | orchestrator | 18:26:54.595 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.595709 | orchestrator | 18:26:54.595 STDOUT terraform:  + image_id = (known after apply) 2025-05-28 18:26:54.595749 | orchestrator | 18:26:54.595 STDOUT terraform:  + image_name = (known after apply) 2025-05-28 18:26:54.595796 | orchestrator | 18:26:54.595 STDOUT terraform:  + key_pair = "testbed" 2025-05-28 18:26:54.595860 | orchestrator | 18:26:54.595 STDOUT terraform:  + name = "testbed-manager" 2025-05-28 18:26:54.595905 | orchestrator | 18:26:54.595 STDOUT terraform:  + power_state = "active" 2025-05-28 18:26:54.595951 | orchestrator | 18:26:54.595 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.595997 | orchestrator | 18:26:54.595 STDOUT terraform:  + security_groups = (known after apply) 2025-05-28 18:26:54.596030 | orchestrator | 18:26:54.595 STDOUT terraform:  + stop_before_destroy = false 2025-05-28 18:26:54.596084 | orchestrator | 18:26:54.596 STDOUT terraform:  + updated = (known after apply) 2025-05-28 18:26:54.596142 | orchestrator | 18:26:54.596 STDOUT terraform:  + user_data = (known after apply) 2025-05-28 18:26:54.596178 | orchestrator | 18:26:54.596 STDOUT terraform:  + block_device 
{ 2025-05-28 18:26:54.596215 | orchestrator | 18:26:54.596 STDOUT terraform:  + boot_index = 0 2025-05-28 18:26:54.596256 | orchestrator | 18:26:54.596 STDOUT terraform:  + delete_on_termination = false 2025-05-28 18:26:54.596295 | orchestrator | 18:26:54.596 STDOUT terraform:  + destination_type = "volume" 2025-05-28 18:26:54.596335 | orchestrator | 18:26:54.596 STDOUT terraform:  + multiattach = false 2025-05-28 18:26:54.596375 | orchestrator | 18:26:54.596 STDOUT terraform:  + source_type = "volume" 2025-05-28 18:26:54.596441 | orchestrator | 18:26:54.596 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 18:26:54.596450 | orchestrator | 18:26:54.596 STDOUT terraform:  } 2025-05-28 18:26:54.596480 | orchestrator | 18:26:54.596 STDOUT terraform:  + network { 2025-05-28 18:26:54.596504 | orchestrator | 18:26:54.596 STDOUT terraform:  + access_network = false 2025-05-28 18:26:54.596544 | orchestrator | 18:26:54.596 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-28 18:26:54.596585 | orchestrator | 18:26:54.596 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-28 18:26:54.596627 | orchestrator | 18:26:54.596 STDOUT terraform:  + mac = (known after apply) 2025-05-28 18:26:54.596669 | orchestrator | 18:26:54.596 STDOUT terraform:  + name = (known after apply) 2025-05-28 18:26:54.596709 | orchestrator | 18:26:54.596 STDOUT terraform:  + port = (known after apply) 2025-05-28 18:26:54.596751 | orchestrator | 18:26:54.596 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 18:26:54.596762 | orchestrator | 18:26:54.596 STDOUT terraform:  } 2025-05-28 18:26:54.596770 | orchestrator | 18:26:54.596 STDOUT terraform:  } 2025-05-28 18:26:54.596834 | orchestrator | 18:26:54.596 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-28 18:26:54.596890 | orchestrator | 18:26:54.596 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-28 18:26:54.596938 | orchestrator | 
18:26:54.596 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-28 18:26:54.596983 | orchestrator | 18:26:54.596 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-28 18:26:54.597030 | orchestrator | 18:26:54.596 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-28 18:26:54.597074 | orchestrator | 18:26:54.597 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 18:26:54.597106 | orchestrator | 18:26:54.597 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.597129 | orchestrator | 18:26:54.597 STDOUT terraform:  + config_drive = true 2025-05-28 18:26:54.597177 | orchestrator | 18:26:54.597 STDOUT terraform:  + created = (known after apply) 2025-05-28 18:26:54.597223 | orchestrator | 18:26:54.597 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-28 18:26:54.597262 | orchestrator | 18:26:54.597 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-28 18:26:54.597294 | orchestrator | 18:26:54.597 STDOUT terraform:  + force_delete = false 2025-05-28 18:26:54.597341 | orchestrator | 18:26:54.597 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-28 18:26:54.597410 | orchestrator | 18:26:54.597 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.597553 | orchestrator | 18:26:54.597 STDOUT terraform:  + image_id = (known after apply) 2025-05-28 18:26:54.597597 | orchestrator | 18:26:54.597 STDOUT terraform:  + image_name = (known after apply) 2025-05-28 18:26:54.597615 | orchestrator | 18:26:54.597 STDOUT terraform:  + key_pair = "testbed" 2025-05-28 18:26:54.597626 | orchestrator | 18:26:54.597 STDOUT terraform:  + name = "testbed-node-0" 2025-05-28 18:26:54.597633 | orchestrator | 18:26:54.597 STDOUT terraform:  + power_state = "active" 2025-05-28 18:26:54.597658 | orchestrator | 18:26:54.597 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.597705 | orchestrator | 18:26:54.597 STDOUT terraform:  + security_groups = (known after apply) 
2025-05-28 18:26:54.597717 | orchestrator | 18:26:54.597 STDOUT terraform:  + stop_before_destroy = false 2025-05-28 18:26:54.597765 | orchestrator | 18:26:54.597 STDOUT terraform:  + updated = (known after apply) 2025-05-28 18:26:54.597828 | orchestrator | 18:26:54.597 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-28 18:26:54.597842 | orchestrator | 18:26:54.597 STDOUT terraform:  + block_device { 2025-05-28 18:26:54.597872 | orchestrator | 18:26:54.597 STDOUT terraform:  + boot_index = 0 2025-05-28 18:26:54.597904 | orchestrator | 18:26:54.597 STDOUT terraform:  + delete_on_termination = false 2025-05-28 18:26:54.597934 | orchestrator | 18:26:54.597 STDOUT terraform:  + destination_type = "volume" 2025-05-28 18:26:54.597957 | orchestrator | 18:26:54.597 STDOUT terraform:  + multiattach = false 2025-05-28 18:26:54.598039 | orchestrator | 18:26:54.597 STDOUT terraform:  + source_type = "volume" 2025-05-28 18:26:54.598065 | orchestrator | 18:26:54.597 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 18:26:54.598084 | orchestrator | 18:26:54.598 STDOUT terraform:  } 2025-05-28 18:26:54.598101 | orchestrator | 18:26:54.598 STDOUT terraform:  + network { 2025-05-28 18:26:54.598117 | orchestrator | 18:26:54.598 STDOUT terraform:  + access_network = false 2025-05-28 18:26:54.598159 | orchestrator | 18:26:54.598 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-28 18:26:54.598198 | orchestrator | 18:26:54.598 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-28 18:26:54.598239 | orchestrator | 18:26:54.598 STDOUT terraform:  + mac = (known after apply) 2025-05-28 18:26:54.598284 | orchestrator | 18:26:54.598 STDOUT terraform:  + name = (known after apply) 2025-05-28 18:26:54.598304 | orchestrator | 18:26:54.598 STDOUT terraform:  + port = (known after apply) 2025-05-28 18:26:54.598362 | orchestrator | 18:26:54.598 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 18:26:54.598376 | 
orchestrator | 18:26:54.598 STDOUT terraform:  } 2025-05-28 18:26:54.598441 | orchestrator | 18:26:54.598 STDOUT terraform:  } 2025-05-28 18:26:54.598455 | orchestrator | 18:26:54.598 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-28 18:26:54.598502 | orchestrator | 18:26:54.598 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-28 18:26:54.598547 | orchestrator | 18:26:54.598 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-28 18:26:54.598569 | orchestrator | 18:26:54.598 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-28 18:26:54.598682 | orchestrator | 18:26:54.598 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-28 18:26:54.598699 | orchestrator | 18:26:54.598 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 18:26:54.598725 | orchestrator | 18:26:54.598 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.598754 | orchestrator | 18:26:54.598 STDOUT terraform:  + config_drive = true 2025-05-28 18:26:54.598799 | orchestrator | 18:26:54.598 STDOUT terraform:  + created = (known after apply) 2025-05-28 18:26:54.598842 | orchestrator | 18:26:54.598 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-28 18:26:54.598878 | orchestrator | 18:26:54.598 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-28 18:26:54.598909 | orchestrator | 18:26:54.598 STDOUT terraform:  + force_delete = false 2025-05-28 18:26:54.598952 | orchestrator | 18:26:54.598 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-28 18:26:54.598996 | orchestrator | 18:26:54.598 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.599039 | orchestrator | 18:26:54.598 STDOUT terraform:  + image_id = (known after apply) 2025-05-28 18:26:54.599084 | orchestrator | 18:26:54.599 STDOUT terraform:  + image_name = (known after apply) 2025-05-28 18:26:54.599115 | orchestrator | 18:26:54.599 STDOUT terraform:  + 
key_pair = "testbed" 2025-05-28 18:26:54.599156 | orchestrator | 18:26:54.599 STDOUT terraform:  + name = "testbed-node-1" 2025-05-28 18:26:54.599188 | orchestrator | 18:26:54.599 STDOUT terraform:  + power_state = "active" 2025-05-28 18:26:54.599231 | orchestrator | 18:26:54.599 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.599274 | orchestrator | 18:26:54.599 STDOUT terraform:  + security_groups = (known after apply) 2025-05-28 18:26:54.599303 | orchestrator | 18:26:54.599 STDOUT terraform:  + stop_before_destroy = false 2025-05-28 18:26:54.599348 | orchestrator | 18:26:54.599 STDOUT terraform:  + updated = (known after apply) 2025-05-28 18:26:54.599445 | orchestrator | 18:26:54.599 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-28 18:26:54.599455 | orchestrator | 18:26:54.599 STDOUT terraform:  + block_device { 2025-05-28 18:26:54.599496 | orchestrator | 18:26:54.599 STDOUT terraform:  + boot_index = 0 2025-05-28 18:26:54.599535 | orchestrator | 18:26:54.599 STDOUT terraform:  + delete_on_termination = false 2025-05-28 18:26:54.599572 | orchestrator | 18:26:54.599 STDOUT terraform:  + destination_type = "volume" 2025-05-28 18:26:54.599605 | orchestrator | 18:26:54.599 STDOUT terraform:  + multiattach = false 2025-05-28 18:26:54.599644 | orchestrator | 18:26:54.599 STDOUT terraform:  + source_type = "volume" 2025-05-28 18:26:54.599684 | orchestrator | 18:26:54.599 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 18:26:54.599692 | orchestrator | 18:26:54.599 STDOUT terraform:  } 2025-05-28 18:26:54.599716 | orchestrator | 18:26:54.599 STDOUT terraform:  + network { 2025-05-28 18:26:54.599740 | orchestrator | 18:26:54.599 STDOUT terraform:  + access_network = false 2025-05-28 18:26:54.599775 | orchestrator | 18:26:54.599 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-28 18:26:54.599809 | orchestrator | 18:26:54.599 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-28 
18:26:54.599848 | orchestrator | 18:26:54.599 STDOUT terraform:  + mac = (known after apply) 2025-05-28 18:26:54.599884 | orchestrator | 18:26:54.599 STDOUT terraform:  + name = (known after apply) 2025-05-28 18:26:54.599919 | orchestrator | 18:26:54.599 STDOUT terraform:  + port = (known after apply) 2025-05-28 18:26:54.599956 | orchestrator | 18:26:54.599 STDOUT terraform:  + uuid = (known after apply) 2025-05-28 18:26:54.599964 | orchestrator | 18:26:54.599 STDOUT terraform:  } 2025-05-28 18:26:54.599972 | orchestrator | 18:26:54.599 STDOUT terraform:  } 2025-05-28 18:26:54.600029 | orchestrator | 18:26:54.599 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-28 18:26:54.600085 | orchestrator | 18:26:54.600 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-28 18:26:54.600126 | orchestrator | 18:26:54.600 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-28 18:26:54.600163 | orchestrator | 18:26:54.600 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-28 18:26:54.600215 | orchestrator | 18:26:54.600 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-28 18:26:54.600252 | orchestrator | 18:26:54.600 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 18:26:54.600280 | orchestrator | 18:26:54.600 STDOUT terraform:  + availability_zone = "nova" 2025-05-28 18:26:54.600305 | orchestrator | 18:26:54.600 STDOUT terraform:  + config_drive = true 2025-05-28 18:26:54.600438 | orchestrator | 18:26:54.600 STDOUT terraform:  + created = (known after apply) 2025-05-28 18:26:54.600448 | orchestrator | 18:26:54.600 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-28 18:26:54.600499 | orchestrator | 18:26:54.600 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-28 18:26:54.600523 | orchestrator | 18:26:54.600 STDOUT terraform:  + force_delete = false 2025-05-28 18:26:54.600565 | orchestrator | 18:26:54.600 STDOUT terraform:  + 
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-05-28 18:26:54.612719 | orchestrator | 18:26:54.612 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-28 18:26:54.612728 | orchestrator | 18:26:54.612 STDOUT terraform:  } 2025-05-28 18:26:54.612753 | orchestrator | 18:26:54.612 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.612783 | orchestrator | 18:26:54.612 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-28 18:26:54.612794 | orchestrator | 18:26:54.612 STDOUT terraform:  } 2025-05-28 18:26:54.612818 | orchestrator | 18:26:54.612 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.612848 | orchestrator | 18:26:54.612 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-28 18:26:54.612859 | orchestrator | 18:26:54.612 STDOUT terraform:  } 2025-05-28 18:26:54.612882 | orchestrator | 18:26:54.612 STDOUT terraform:  + binding (known after apply) 2025-05-28 18:26:54.612892 | orchestrator | 18:26:54.612 STDOUT terraform:  + fixed_ip { 2025-05-28 18:26:54.612919 | orchestrator | 18:26:54.612 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-28 18:26:54.612947 | orchestrator | 18:26:54.612 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-28 18:26:54.612954 | orchestrator | 18:26:54.612 STDOUT terraform:  } 2025-05-28 18:26:54.612968 | orchestrator | 18:26:54.612 STDOUT terraform:  } 2025-05-28 18:26:54.613017 | orchestrator | 18:26:54.612 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-28 18:26:54.613066 | orchestrator | 18:26:54.613 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-28 18:26:54.613107 | orchestrator | 18:26:54.613 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-28 18:26:54.613143 | orchestrator | 18:26:54.613 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-28 18:26:54.613180 | orchestrator | 18:26:54.613 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-05-28 18:26:54.613217 | orchestrator | 18:26:54.613 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 18:26:54.613254 | orchestrator | 18:26:54.613 STDOUT terraform:  + device_id = (known after apply) 2025-05-28 18:26:54.613290 | orchestrator | 18:26:54.613 STDOUT terraform:  + device_owner = (known after apply) 2025-05-28 18:26:54.613328 | orchestrator | 18:26:54.613 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-28 18:26:54.613367 | orchestrator | 18:26:54.613 STDOUT terraform:  + dns_name = (known after apply) 2025-05-28 18:26:54.613447 | orchestrator | 18:26:54.613 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.613455 | orchestrator | 18:26:54.613 STDOUT terraform:  + mac_address = (known after apply) 2025-05-28 18:26:54.613495 | orchestrator | 18:26:54.613 STDOUT terraform:  + network_id = (known after apply) 2025-05-28 18:26:54.613529 | orchestrator | 18:26:54.613 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-28 18:26:54.613567 | orchestrator | 18:26:54.613 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-28 18:26:54.613606 | orchestrator | 18:26:54.613 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.613643 | orchestrator | 18:26:54.613 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-28 18:26:54.613679 | orchestrator | 18:26:54.613 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 18:26:54.613701 | orchestrator | 18:26:54.613 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.613733 | orchestrator | 18:26:54.613 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-28 18:26:54.613739 | orchestrator | 18:26:54.613 STDOUT terraform:  } 2025-05-28 18:26:54.613765 | orchestrator | 18:26:54.613 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.613795 | orchestrator | 18:26:54.613 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-28 18:26:54.613801 | 
orchestrator | 18:26:54.613 STDOUT terraform:  } 2025-05-28 18:26:54.613827 | orchestrator | 18:26:54.613 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.613857 | orchestrator | 18:26:54.613 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-28 18:26:54.613863 | orchestrator | 18:26:54.613 STDOUT terraform:  } 2025-05-28 18:26:54.613888 | orchestrator | 18:26:54.613 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.613920 | orchestrator | 18:26:54.613 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-28 18:26:54.613935 | orchestrator | 18:26:54.613 STDOUT terraform:  } 2025-05-28 18:26:54.613956 | orchestrator | 18:26:54.613 STDOUT terraform:  + binding (known after apply) 2025-05-28 18:26:54.613963 | orchestrator | 18:26:54.613 STDOUT terraform:  + fixed_ip { 2025-05-28 18:26:54.613997 | orchestrator | 18:26:54.613 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-28 18:26:54.614052 | orchestrator | 18:26:54.613 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-28 18:26:54.614061 | orchestrator | 18:26:54.614 STDOUT terraform:  } 2025-05-28 18:26:54.614066 | orchestrator | 18:26:54.614 STDOUT terraform:  } 2025-05-28 18:26:54.614117 | orchestrator | 18:26:54.614 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-28 18:26:54.614164 | orchestrator | 18:26:54.614 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-28 18:26:54.614203 | orchestrator | 18:26:54.614 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-28 18:26:54.614242 | orchestrator | 18:26:54.614 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-28 18:26:54.614278 | orchestrator | 18:26:54.614 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-28 18:26:54.614315 | orchestrator | 18:26:54.614 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 18:26:54.614355 | orchestrator | 
18:26:54.614 STDOUT terraform:  + device_id = (known after apply) 2025-05-28 18:26:54.614426 | orchestrator | 18:26:54.614 STDOUT terraform:  + device_owner = (known after apply) 2025-05-28 18:26:54.614462 | orchestrator | 18:26:54.614 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-28 18:26:54.614500 | orchestrator | 18:26:54.614 STDOUT terraform:  + dns_name = (known after apply) 2025-05-28 18:26:54.614539 | orchestrator | 18:26:54.614 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.614577 | orchestrator | 18:26:54.614 STDOUT terraform:  + mac_address = (known after apply) 2025-05-28 18:26:54.614614 | orchestrator | 18:26:54.614 STDOUT terraform:  + network_id = (known after apply) 2025-05-28 18:26:54.614671 | orchestrator | 18:26:54.614 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-28 18:26:54.614710 | orchestrator | 18:26:54.614 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-28 18:26:54.614749 | orchestrator | 18:26:54.614 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.614787 | orchestrator | 18:26:54.614 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-28 18:26:54.614827 | orchestrator | 18:26:54.614 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 18:26:54.614846 | orchestrator | 18:26:54.614 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.614879 | orchestrator | 18:26:54.614 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-28 18:26:54.614886 | orchestrator | 18:26:54.614 STDOUT terraform:  } 2025-05-28 18:26:54.614914 | orchestrator | 18:26:54.614 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.614945 | orchestrator | 18:26:54.614 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-28 18:26:54.614957 | orchestrator | 18:26:54.614 STDOUT terraform:  } 2025-05-28 18:26:54.614979 | orchestrator | 18:26:54.614 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 
18:26:54.615008 | orchestrator | 18:26:54.614 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-28 18:26:54.615015 | orchestrator | 18:26:54.615 STDOUT terraform:  } 2025-05-28 18:26:54.615042 | orchestrator | 18:26:54.615 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.615071 | orchestrator | 18:26:54.615 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-28 18:26:54.615078 | orchestrator | 18:26:54.615 STDOUT terraform:  } 2025-05-28 18:26:54.615109 | orchestrator | 18:26:54.615 STDOUT terraform:  + binding (known after apply) 2025-05-28 18:26:54.615118 | orchestrator | 18:26:54.615 STDOUT terraform:  + fixed_ip { 2025-05-28 18:26:54.615150 | orchestrator | 18:26:54.615 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-28 18:26:54.615182 | orchestrator | 18:26:54.615 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-28 18:26:54.615189 | orchestrator | 18:26:54.615 STDOUT terraform:  } 2025-05-28 18:26:54.615211 | orchestrator | 18:26:54.615 STDOUT terraform:  } 2025-05-28 18:26:54.615258 | orchestrator | 18:26:54.615 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-28 18:26:54.615303 | orchestrator | 18:26:54.615 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-28 18:26:54.615341 | orchestrator | 18:26:54.615 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-28 18:26:54.615392 | orchestrator | 18:26:54.615 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-28 18:26:54.615429 | orchestrator | 18:26:54.615 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-28 18:26:54.615472 | orchestrator | 18:26:54.615 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 18:26:54.615504 | orchestrator | 18:26:54.615 STDOUT terraform:  + device_id = (known after apply) 2025-05-28 18:26:54.615541 | orchestrator | 18:26:54.615 STDOUT terraform:  + device_owner = (known after 
apply) 2025-05-28 18:26:54.615580 | orchestrator | 18:26:54.615 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-28 18:26:54.615615 | orchestrator | 18:26:54.615 STDOUT terraform:  + dns_name = (known after apply) 2025-05-28 18:26:54.615664 | orchestrator | 18:26:54.615 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.615701 | orchestrator | 18:26:54.615 STDOUT terraform:  + mac_address = (known after apply) 2025-05-28 18:26:54.615740 | orchestrator | 18:26:54.615 STDOUT terraform:  + network_id = (known after apply) 2025-05-28 18:26:54.615778 | orchestrator | 18:26:54.615 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-28 18:26:54.615816 | orchestrator | 18:26:54.615 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-28 18:26:54.615854 | orchestrator | 18:26:54.615 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.615892 | orchestrator | 18:26:54.615 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-28 18:26:54.615926 | orchestrator | 18:26:54.615 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 18:26:54.615950 | orchestrator | 18:26:54.615 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.615980 | orchestrator | 18:26:54.615 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-28 18:26:54.615987 | orchestrator | 18:26:54.615 STDOUT terraform:  } 2025-05-28 18:26:54.616014 | orchestrator | 18:26:54.615 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.616045 | orchestrator | 18:26:54.616 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-28 18:26:54.616052 | orchestrator | 18:26:54.616 STDOUT terraform:  } 2025-05-28 18:26:54.616078 | orchestrator | 18:26:54.616 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.616109 | orchestrator | 18:26:54.616 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-28 18:26:54.616116 | orchestrator | 18:26:54.616 STDOUT terraform:  } 
2025-05-28 18:26:54.616143 | orchestrator | 18:26:54.616 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.616174 | orchestrator | 18:26:54.616 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-28 18:26:54.616180 | orchestrator | 18:26:54.616 STDOUT terraform:  } 2025-05-28 18:26:54.616209 | orchestrator | 18:26:54.616 STDOUT terraform:  + binding (known after apply) 2025-05-28 18:26:54.616216 | orchestrator | 18:26:54.616 STDOUT terraform:  + fixed_ip { 2025-05-28 18:26:54.616246 | orchestrator | 18:26:54.616 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-28 18:26:54.616276 | orchestrator | 18:26:54.616 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-28 18:26:54.616283 | orchestrator | 18:26:54.616 STDOUT terraform:  } 2025-05-28 18:26:54.616303 | orchestrator | 18:26:54.616 STDOUT terraform:  } 2025-05-28 18:26:54.616370 | orchestrator | 18:26:54.616 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-28 18:26:54.616450 | orchestrator | 18:26:54.616 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-28 18:26:54.616495 | orchestrator | 18:26:54.616 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-28 18:26:54.616535 | orchestrator | 18:26:54.616 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-28 18:26:54.616571 | orchestrator | 18:26:54.616 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-28 18:26:54.616608 | orchestrator | 18:26:54.616 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 18:26:54.616649 | orchestrator | 18:26:54.616 STDOUT terraform:  + device_id = (known after apply) 2025-05-28 18:26:54.616687 | orchestrator | 18:26:54.616 STDOUT terraform:  + device_owner = (known after apply) 2025-05-28 18:26:54.616725 | orchestrator | 18:26:54.616 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-28 18:26:54.616762 | orchestrator | 
18:26:54.616 STDOUT terraform:  + dns_name = (known after apply) 2025-05-28 18:26:54.616801 | orchestrator | 18:26:54.616 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.616838 | orchestrator | 18:26:54.616 STDOUT terraform:  + mac_address = (known after apply) 2025-05-28 18:26:54.616874 | orchestrator | 18:26:54.616 STDOUT terraform:  + network_id = (known after apply) 2025-05-28 18:26:54.616914 | orchestrator | 18:26:54.616 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-28 18:26:54.616952 | orchestrator | 18:26:54.616 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-28 18:26:54.616991 | orchestrator | 18:26:54.616 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.617030 | orchestrator | 18:26:54.616 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-28 18:26:54.617066 | orchestrator | 18:26:54.617 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 18:26:54.617094 | orchestrator | 18:26:54.617 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.617121 | orchestrator | 18:26:54.617 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-28 18:26:54.617128 | orchestrator | 18:26:54.617 STDOUT terraform:  } 2025-05-28 18:26:54.617156 | orchestrator | 18:26:54.617 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.617186 | orchestrator | 18:26:54.617 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-28 18:26:54.617193 | orchestrator | 18:26:54.617 STDOUT terraform:  } 2025-05-28 18:26:54.617220 | orchestrator | 18:26:54.617 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.617251 | orchestrator | 18:26:54.617 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-28 18:26:54.617257 | orchestrator | 18:26:54.617 STDOUT terraform:  } 2025-05-28 18:26:54.617284 | orchestrator | 18:26:54.617 STDOUT terraform:  + allowed_address_pairs { 2025-05-28 18:26:54.617316 | orchestrator | 18:26:54.617 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-05-28 18:26:54.617323 | orchestrator | 18:26:54.617 STDOUT terraform:  } 2025-05-28 18:26:54.617355 | orchestrator | 18:26:54.617 STDOUT terraform:  + binding (known after apply) 2025-05-28 18:26:54.617363 | orchestrator | 18:26:54.617 STDOUT terraform:  + fixed_ip { 2025-05-28 18:26:54.617427 | orchestrator | 18:26:54.617 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-28 18:26:54.617448 | orchestrator | 18:26:54.617 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-28 18:26:54.617455 | orchestrator | 18:26:54.617 STDOUT terraform:  } 2025-05-28 18:26:54.617476 | orchestrator | 18:26:54.617 STDOUT terraform:  } 2025-05-28 18:26:54.617525 | orchestrator | 18:26:54.617 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-28 18:26:54.617575 | orchestrator | 18:26:54.617 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-28 18:26:54.617597 | orchestrator | 18:26:54.617 STDOUT terraform:  + force_destroy = false 2025-05-28 18:26:54.617628 | orchestrator | 18:26:54.617 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.617659 | orchestrator | 18:26:54.617 STDOUT terraform:  + port_id = (known after apply) 2025-05-28 18:26:54.617688 | orchestrator | 18:26:54.617 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.617721 | orchestrator | 18:26:54.617 STDOUT terraform:  + router_id = (known after apply) 2025-05-28 18:26:54.617752 | orchestrator | 18:26:54.617 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-28 18:26:54.617758 | orchestrator | 18:26:54.617 STDOUT terraform:  } 2025-05-28 18:26:54.617801 | orchestrator | 18:26:54.617 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-28 18:26:54.617842 | orchestrator | 18:26:54.617 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-28 18:26:54.617881 
| orchestrator | 18:26:54.617 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-28 18:26:54.617920 | orchestrator | 18:26:54.617 STDOUT terraform:  + all_tags = (known after apply) 2025-05-28 18:26:54.617944 | orchestrator | 18:26:54.617 STDOUT terraform:  + availability_zone_hints = [ 2025-05-28 18:26:54.617951 | orchestrator | 18:26:54.617 STDOUT terraform:  + "nova", 2025-05-28 18:26:54.617971 | orchestrator | 18:26:54.617 STDOUT terraform:  ] 2025-05-28 18:26:54.618011 | orchestrator | 18:26:54.617 STDOUT terraform:  + distributed = (known after apply) 2025-05-28 18:26:54.618069 | orchestrator | 18:26:54.618 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-28 18:26:54.618122 | orchestrator | 18:26:54.618 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-28 18:26:54.618160 | orchestrator | 18:26:54.618 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.618191 | orchestrator | 18:26:54.618 STDOUT terraform:  + name = "testbed" 2025-05-28 18:26:54.618230 | orchestrator | 18:26:54.618 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.618270 | orchestrator | 18:26:54.618 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 18:26:54.618302 | orchestrator | 18:26:54.618 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-28 18:26:54.618308 | orchestrator | 18:26:54.618 STDOUT terraform:  } 2025-05-28 18:26:54.618368 | orchestrator | 18:26:54.618 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-28 18:26:54.618435 | orchestrator | 18:26:54.618 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-28 18:26:54.618456 | orchestrator | 18:26:54.618 STDOUT terraform:  + description = "ssh" 2025-05-28 18:26:54.618480 | orchestrator | 18:26:54.618 STDOUT terraform:  + direction = "ingress" 2025-05-28 18:26:54.618505 | 
orchestrator | 18:26:54.618 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 18:26:54.618540 | orchestrator | 18:26:54.618 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.618561 | orchestrator | 18:26:54.618 STDOUT terraform:  + port_range_max = 22 2025-05-28 18:26:54.618585 | orchestrator | 18:26:54.618 STDOUT terraform:  + port_range_min = 22 2025-05-28 18:26:54.618607 | orchestrator | 18:26:54.618 STDOUT terraform:  + protocol = "tcp" 2025-05-28 18:26:54.618638 | orchestrator | 18:26:54.618 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.618672 | orchestrator | 18:26:54.618 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 18:26:54.618699 | orchestrator | 18:26:54.618 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-28 18:26:54.618732 | orchestrator | 18:26:54.618 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 18:26:54.618763 | orchestrator | 18:26:54.618 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 18:26:54.618769 | orchestrator | 18:26:54.618 STDOUT terraform:  } 2025-05-28 18:26:54.618829 | orchestrator | 18:26:54.618 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-28 18:26:54.618892 | orchestrator | 18:26:54.618 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-28 18:26:54.618917 | orchestrator | 18:26:54.618 STDOUT terraform:  + description = "wireguard" 2025-05-28 18:26:54.618941 | orchestrator | 18:26:54.618 STDOUT terraform:  + direction = "ingress" 2025-05-28 18:26:54.618966 | orchestrator | 18:26:54.618 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 18:26:54.619000 | orchestrator | 18:26:54.618 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.619022 | orchestrator | 18:26:54.618 STDOUT terraform:  + port_range_max = 51820 2025-05-28 18:26:54.619046 | orchestrator | 18:26:54.619 STDOUT 
terraform:  + port_range_min = 51820 2025-05-28 18:26:54.619055 | orchestrator | 18:26:54.619 STDOUT terraform:  + protocol = "udp" 2025-05-28 18:26:54.619095 | orchestrator | 18:26:54.619 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.619127 | orchestrator | 18:26:54.619 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 18:26:54.619156 | orchestrator | 18:26:54.619 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-28 18:26:54.619183 | orchestrator | 18:26:54.619 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 18:26:54.619216 | orchestrator | 18:26:54.619 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 18:26:54.619231 | orchestrator | 18:26:54.619 STDOUT terraform:  } 2025-05-28 18:26:54.619282 | orchestrator | 18:26:54.619 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-28 18:26:54.619335 | orchestrator | 18:26:54.619 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-28 18:26:54.619362 | orchestrator | 18:26:54.619 STDOUT terraform:  + direction = "ingress" 2025-05-28 18:26:54.619403 | orchestrator | 18:26:54.619 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 18:26:54.619429 | orchestrator | 18:26:54.619 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.619452 | orchestrator | 18:26:54.619 STDOUT terraform:  + protocol = "tcp" 2025-05-28 18:26:54.619482 | orchestrator | 18:26:54.619 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.619515 | orchestrator | 18:26:54.619 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 18:26:54.619547 | orchestrator | 18:26:54.619 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-28 18:26:54.619579 | orchestrator | 18:26:54.619 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 18:26:54.619611 | orchestrator | 
18:26:54.619 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 18:26:54.619623 | orchestrator | 18:26:54.619 STDOUT terraform:  } 2025-05-28 18:26:54.619674 | orchestrator | 18:26:54.619 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-28 18:26:54.619730 | orchestrator | 18:26:54.619 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-28 18:26:54.619757 | orchestrator | 18:26:54.619 STDOUT terraform:  + direction = "ingress" 2025-05-28 18:26:54.619778 | orchestrator | 18:26:54.619 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 18:26:54.619811 | orchestrator | 18:26:54.619 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.619833 | orchestrator | 18:26:54.619 STDOUT terraform:  + protocol = "udp" 2025-05-28 18:26:54.619865 | orchestrator | 18:26:54.619 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.619898 | orchestrator | 18:26:54.619 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 18:26:54.619929 | orchestrator | 18:26:54.619 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-28 18:26:54.619962 | orchestrator | 18:26:54.619 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 18:26:54.619992 | orchestrator | 18:26:54.619 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 18:26:54.619999 | orchestrator | 18:26:54.619 STDOUT terraform:  } 2025-05-28 18:26:54.620059 | orchestrator | 18:26:54.619 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-28 18:26:54.620115 | orchestrator | 18:26:54.620 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-28 18:26:54.620144 | orchestrator | 18:26:54.620 STDOUT terraform:  + direction = "ingress" 2025-05-28 18:26:54.620171 | orchestrator | 18:26:54.620 
STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 18:26:54.620199 | orchestrator | 18:26:54.620 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.620208 | orchestrator | 18:26:54.620 STDOUT terraform:  + protocol = "icmp" 2025-05-28 18:26:54.620301 | orchestrator | 18:26:54.620 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.620331 | orchestrator | 18:26:54.620 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 18:26:54.620357 | orchestrator | 18:26:54.620 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-28 18:26:54.620415 | orchestrator | 18:26:54.620 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-28 18:26:54.620445 | orchestrator | 18:26:54.620 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-28 18:26:54.620452 | orchestrator | 18:26:54.620 STDOUT terraform:  } 2025-05-28 18:26:54.620508 | orchestrator | 18:26:54.620 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-28 18:26:54.623572 | orchestrator | 18:26:54.620 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-28 18:26:54.623599 | orchestrator | 18:26:54.620 STDOUT terraform:  + direction = "ingress" 2025-05-28 18:26:54.623606 | orchestrator | 18:26:54.620 STDOUT terraform:  + ethertype = "IPv4" 2025-05-28 18:26:54.623618 | orchestrator | 18:26:54.620 STDOUT terraform:  + id = (known after apply) 2025-05-28 18:26:54.623627 | orchestrator | 18:26:54.620 STDOUT terraform:  + protocol = "tcp" 2025-05-28 18:26:54.623631 | orchestrator | 18:26:54.620 STDOUT terraform:  + region = (known after apply) 2025-05-28 18:26:54.623635 | orchestrator | 18:26:54.620 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-28 18:26:54.623639 | orchestrator | 18:26:54.620 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 
2025-05-28 18:26:54.623643 | orchestrator | 18:26:54.620 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-28 18:26:54.623646 | orchestrator | 18:26:54.620 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-28 18:26:54.623650 | orchestrator | 18:26:54.620 STDOUT terraform:  }
2025-05-28 18:26:54.623654 | orchestrator | 18:26:54.620 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-05-28 18:26:54.623658 | orchestrator | 18:26:54.621 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-05-28 18:26:54.623662 | orchestrator | 18:26:54.621 STDOUT terraform:  + direction = "ingress"
2025-05-28 18:26:54.623666 | orchestrator | 18:26:54.621 STDOUT terraform:  + ethertype = "IPv4"
2025-05-28 18:26:54.623669 | orchestrator | 18:26:54.621 STDOUT terraform:  + id = (known after apply)
2025-05-28 18:26:54.623673 | orchestrator | 18:26:54.621 STDOUT terraform:  + protocol = "udp"
2025-05-28 18:26:54.623677 | orchestrator | 18:26:54.621 STDOUT terraform:  + region = (known after apply)
2025-05-28 18:26:54.623681 | orchestrator | 18:26:54.621 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-28 18:26:54.623684 | orchestrator | 18:26:54.621 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-28 18:26:54.623688 | orchestrator | 18:26:54.621 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-28 18:26:54.623692 | orchestrator | 18:26:54.621 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-28 18:26:54.623695 | orchestrator | 18:26:54.621 STDOUT terraform:  }
2025-05-28 18:26:54.623699 | orchestrator | 18:26:54.621 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-05-28 18:26:54.623703 | orchestrator | 18:26:54.621 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-05-28 18:26:54.623707 | orchestrator | 18:26:54.621 STDOUT terraform:  + direction = "ingress"
2025-05-28 18:26:54.623710 | orchestrator | 18:26:54.621 STDOUT terraform:  + ethertype = "IPv4"
2025-05-28 18:26:54.623714 | orchestrator | 18:26:54.621 STDOUT terraform:  + id = (known after apply)
2025-05-28 18:26:54.623718 | orchestrator | 18:26:54.621 STDOUT terraform:  + protocol = "icmp"
2025-05-28 18:26:54.623722 | orchestrator | 18:26:54.621 STDOUT terraform:  + region = (known after apply)
2025-05-28 18:26:54.623725 | orchestrator | 18:26:54.621 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-28 18:26:54.623729 | orchestrator | 18:26:54.621 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-28 18:26:54.623736 | orchestrator | 18:26:54.621 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-28 18:26:54.623740 | orchestrator | 18:26:54.621 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-28 18:26:54.623744 | orchestrator | 18:26:54.621 STDOUT terraform:  }
2025-05-28 18:26:54.623748 | orchestrator | 18:26:54.621 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-05-28 18:26:54.623759 | orchestrator | 18:26:54.621 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-05-28 18:26:54.623764 | orchestrator | 18:26:54.621 STDOUT terraform:  + description = "vrrp"
2025-05-28 18:26:54.623768 | orchestrator | 18:26:54.621 STDOUT terraform:  + direction = "ingress"
2025-05-28 18:26:54.623772 | orchestrator | 18:26:54.621 STDOUT terraform:  + ethertype = "IPv4"
2025-05-28 18:26:54.623776 | orchestrator | 18:26:54.621 STDOUT terraform:  + id = (known after apply)
2025-05-28 18:26:54.623782 | orchestrator | 18:26:54.621 STDOUT terraform:  + protocol = "112"
2025-05-28 18:26:54.623786 | orchestrator | 18:26:54.621 STDOUT terraform:  + region = (known after apply)
2025-05-28 18:26:54.623790 | orchestrator | 18:26:54.621 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-28 18:26:54.623793 | orchestrator | 18:26:54.621 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-28 18:26:54.623797 | orchestrator | 18:26:54.621 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-28 18:26:54.623801 | orchestrator | 18:26:54.621 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-28 18:26:54.623804 | orchestrator | 18:26:54.621 STDOUT terraform:  }
2025-05-28 18:26:54.623808 | orchestrator | 18:26:54.622 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-05-28 18:26:54.623812 | orchestrator | 18:26:54.622 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-05-28 18:26:54.623816 | orchestrator | 18:26:54.622 STDOUT terraform:  + all_tags = (known after apply)
2025-05-28 18:26:54.623820 | orchestrator | 18:26:54.622 STDOUT terraform:  + description = "management security group"
2025-05-28 18:26:54.623824 | orchestrator | 18:26:54.622 STDOUT terraform:  + id = (known after apply)
2025-05-28 18:26:54.623827 | orchestrator | 18:26:54.622 STDOUT terraform:  + name = "testbed-management"
2025-05-28 18:26:54.623831 | orchestrator | 18:26:54.622 STDOUT terraform:  + region = (known after apply)
2025-05-28 18:26:54.623835 | orchestrator | 18:26:54.622 STDOUT terraform:  + stateful = (known after apply)
2025-05-28 18:26:54.623839 | orchestrator | 18:26:54.622 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-28 18:26:54.623842 | orchestrator | 18:26:54.622 STDOUT terraform:  }
2025-05-28 18:26:54.623846 | orchestrator | 18:26:54.622 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-05-28 18:26:54.623850 | orchestrator | 18:26:54.622 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-05-28 18:26:54.623854 | orchestrator | 18:26:54.622 STDOUT terraform:  + all_tags = (known after apply)
2025-05-28 18:26:54.623857 | orchestrator | 18:26:54.622 STDOUT terraform:  + description = "node security group"
2025-05-28 18:26:54.623865 | orchestrator | 18:26:54.622 STDOUT terraform:  + id = (known after apply)
2025-05-28 18:26:54.623868 | orchestrator | 18:26:54.622 STDOUT terraform:  + name = "testbed-node"
2025-05-28 18:26:54.623872 | orchestrator | 18:26:54.622 STDOUT terraform:  + region = (known after apply)
2025-05-28 18:26:54.623876 | orchestrator | 18:26:54.622 STDOUT terraform:  + stateful = (known after apply)
2025-05-28 18:26:54.623879 | orchestrator | 18:26:54.622 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-28 18:26:54.623883 | orchestrator | 18:26:54.622 STDOUT terraform:  }
2025-05-28 18:26:54.623887 | orchestrator | 18:26:54.622 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-05-28 18:26:54.623891 | orchestrator | 18:26:54.622 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-05-28 18:26:54.623894 | orchestrator | 18:26:54.622 STDOUT terraform:  + all_tags = (known after apply)
2025-05-28 18:26:54.623898 | orchestrator | 18:26:54.622 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-05-28 18:26:54.623902 | orchestrator | 18:26:54.622 STDOUT terraform:  + dns_nameservers = [
2025-05-28 18:26:54.623906 | orchestrator | 18:26:54.622 STDOUT terraform:  + "8.8.8.8",
2025-05-28 18:26:54.623912 | orchestrator | 18:26:54.622 STDOUT terraform:  + "9.9.9.9",
2025-05-28 18:26:54.623916 | orchestrator | 18:26:54.622 STDOUT terraform:  ]
2025-05-28 18:26:54.623919 | orchestrator | 18:26:54.622 STDOUT terraform:  + enable_dhcp = true
2025-05-28 18:26:54.623923 | orchestrator | 18:26:54.622 STDOUT terraform:  + gateway_ip = (known after apply)
2025-05-28 18:26:54.623927 | orchestrator | 18:26:54.622 STDOUT terraform:  + id = (known after apply)
2025-05-28 18:26:54.623931 | orchestrator | 18:26:54.622 STDOUT terraform:  + ip_version = 4
2025-05-28 18:26:54.623934 | orchestrator | 18:26:54.622 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-05-28 18:26:54.623938 | orchestrator | 18:26:54.622 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-05-28 18:26:54.623942 | orchestrator | 18:26:54.622 STDOUT terraform:  + name = "subnet-testbed-management"
2025-05-28 18:26:54.623946 | orchestrator | 18:26:54.622 STDOUT terraform:  + network_id = (known after apply)
2025-05-28 18:26:54.623950 | orchestrator | 18:26:54.623 STDOUT terraform:  + no_gateway = false
2025-05-28 18:26:54.623954 | orchestrator | 18:26:54.623 STDOUT terraform:  + region = (known after apply)
2025-05-28 18:26:54.623957 | orchestrator | 18:26:54.623 STDOUT terraform:  + service_types = (known after apply)
2025-05-28 18:26:54.623961 | orchestrator | 18:26:54.623 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-28 18:26:54.623965 | orchestrator | 18:26:54.623 STDOUT terraform:  + allocation_pool {
2025-05-28 18:26:54.623969 | orchestrator | 18:26:54.623 STDOUT terraform:  + end = "192.168.31.250"
2025-05-28 18:26:54.623973 | orchestrator | 18:26:54.623 STDOUT terraform:  + start = "192.168.31.200"
2025-05-28 18:26:54.623976 | orchestrator | 18:26:54.623 STDOUT terraform:  }
2025-05-28 18:26:54.623980 | orchestrator | 18:26:54.623 STDOUT terraform:  }
2025-05-28 18:26:54.623986 | orchestrator | 18:26:54.623 STDOUT terraform:  # terraform_data.image will be created
2025-05-28 18:26:54.624018 | orchestrator | 18:26:54.623 STDOUT terraform:  + resource "terraform_data" "image" {
2025-05-28 18:26:54.624022 | orchestrator | 18:26:54.623 STDOUT terraform:  + id = (known after apply)
2025-05-28 18:26:54.624026 | orchestrator | 18:26:54.623 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-28 18:26:54.624030 | orchestrator | 18:26:54.623 STDOUT terraform:  + output = (known after apply)
2025-05-28 18:26:54.624034 | orchestrator | 18:26:54.623 STDOUT terraform:  }
2025-05-28 18:26:54.624038 | orchestrator | 18:26:54.623 STDOUT terraform:  # terraform_data.image_node will be created
2025-05-28 18:26:54.624041 | orchestrator | 18:26:54.623 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-05-28 18:26:54.624045 | orchestrator | 18:26:54.623 STDOUT terraform:  + id = (known after apply)
2025-05-28 18:26:54.624049 | orchestrator | 18:26:54.623 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-28 18:26:54.624053 | orchestrator | 18:26:54.623 STDOUT terraform:  + output = (known after apply)
2025-05-28 18:26:54.624057 | orchestrator | 18:26:54.623 STDOUT terraform:  }
2025-05-28 18:26:54.624061 | orchestrator | 18:26:54.623 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-05-28 18:26:54.624065 | orchestrator | 18:26:54.623 STDOUT terraform: Changes to Outputs:
2025-05-28 18:26:54.624068 | orchestrator | 18:26:54.623 STDOUT terraform:  + manager_address = (sensitive value)
2025-05-28 18:26:54.624072 | orchestrator | 18:26:54.623 STDOUT terraform:  + private_key = (sensitive value)
2025-05-28 18:26:54.848251 | orchestrator | 18:26:54.847 STDOUT terraform: terraform_data.image: Creating...
2025-05-28 18:26:54.849134 | orchestrator | 18:26:54.848 STDOUT terraform: terraform_data.image_node: Creating...
2025-05-28 18:26:54.849604 | orchestrator | 18:26:54.849 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=7a609998-7758-e22a-44f2-bb5518a695e4]
2025-05-28 18:26:54.850441 | orchestrator | 18:26:54.850 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=2652157b-3792-ee74-a699-6610e642d526]
2025-05-28 18:26:54.855041 | orchestrator | 18:26:54.854 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-05-28 18:26:54.856474 | orchestrator | 18:26:54.856 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-05-28 18:26:54.865799 | orchestrator | 18:26:54.865 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-05-28 18:26:54.875114 | orchestrator | 18:26:54.875 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-05-28 18:26:54.875468 | orchestrator | 18:26:54.875 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-05-28 18:26:54.875987 | orchestrator | 18:26:54.875 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-05-28 18:26:54.876261 | orchestrator | 18:26:54.876 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-05-28 18:26:54.876722 | orchestrator | 18:26:54.876 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-05-28 18:26:54.885320 | orchestrator | 18:26:54.885 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-05-28 18:26:54.887869 | orchestrator | 18:26:54.887 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-05-28 18:26:55.369554 | orchestrator | 18:26:55.369 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-05-28 18:26:55.371948 | orchestrator | 18:26:55.371 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-28 18:26:55.379183 | orchestrator | 18:26:55.379 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-05-28 18:26:55.383115 | orchestrator | 18:26:55.382 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-05-28 18:26:55.385728 | orchestrator | 18:26:55.385 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-28 18:26:55.391605 | orchestrator | 18:26:55.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-05-28 18:27:00.877622 | orchestrator | 18:27:00.877 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=f7d76636-2f4e-4b0b-be62-d28b5deb0aab]
2025-05-28 18:27:00.901461 | orchestrator | 18:27:00.901 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-05-28 18:27:00.910932 | orchestrator | 18:27:00.910 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=61e4234f60179db78995ecfcf239a19609d0bf8b]
2025-05-28 18:27:00.924621 | orchestrator | 18:27:00.924 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-05-28 18:27:00.931791 | orchestrator | 18:27:00.931 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=34ceb740514bebc1bcc2b8c3f4b8b1ae02342dcf]
2025-05-28 18:27:00.942079 | orchestrator | 18:27:00.941 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-05-28 18:27:04.877531 | orchestrator | 18:27:04.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-05-28 18:27:04.878779 | orchestrator | 18:27:04.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-05-28 18:27:04.878819 | orchestrator | 18:27:04.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-05-28 18:27:04.878832 | orchestrator | 18:27:04.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-05-28 18:27:04.886970 | orchestrator | 18:27:04.886 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-05-28 18:27:04.889009 | orchestrator | 18:27:04.888 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-05-28 18:27:05.381271 | orchestrator | 18:27:05.380 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-05-28 18:27:05.384179 | orchestrator | 18:27:05.383 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-05-28 18:27:05.394306 | orchestrator | 18:27:05.394 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-05-28 18:27:05.424520 | orchestrator | 18:27:05.424 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=30074f97-ca08-4933-8c1f-7f138584444d]
2025-05-28 18:27:05.445083 | orchestrator | 18:27:05.444 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-05-28 18:27:05.445434 | orchestrator | 18:27:05.445 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=6fe61b53-6367-46c0-9f1e-24f42cf64445]
2025-05-28 18:27:05.452626 | orchestrator | 18:27:05.452 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-05-28 18:27:05.469808 | orchestrator | 18:27:05.469 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=1334c062-0c98-48ca-b2e9-c7f7d80524d4]
2025-05-28 18:27:05.474543 | orchestrator | 18:27:05.474 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-05-28 18:27:05.476472 | orchestrator | 18:27:05.476 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=384074e9-09a1-4592-86bd-93fc7dbc72b1]
2025-05-28 18:27:05.481904 | orchestrator | 18:27:05.481 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-05-28 18:27:05.493350 | orchestrator | 18:27:05.492 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=3485bbb9-dc34-4923-9640-15ed9830c3cd]
2025-05-28 18:27:05.501507 | orchestrator | 18:27:05.501 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-05-28 18:27:05.564116 | orchestrator | 18:27:05.563 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=0c0aa11d-14fc-40a7-bbcb-a7c7d902b836]
2025-05-28 18:27:05.576157 | orchestrator | 18:27:05.575 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-05-28 18:27:05.579546 | orchestrator | 18:27:05.579 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=669b4378-b931-4094-a90b-e4d774be1d1d]
2025-05-28 18:27:05.583469 | orchestrator | 18:27:05.583 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=49a2ee15-28bf-4b5f-b85e-3182eb91d801]
2025-05-28 18:27:05.588219 | orchestrator | 18:27:05.587 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-05-28 18:27:05.599914 | orchestrator | 18:27:05.599 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=1e78336b-5c45-4f72-b22f-cac6621703c1]
2025-05-28 18:27:10.943775 | orchestrator | 18:27:10.943 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-05-28 18:27:11.239050 | orchestrator | 18:27:11.238 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=96b987ed-9174-4ed7-a9c0-ebab61d1f056]
2025-05-28 18:27:11.448882 | orchestrator | 18:27:11.448 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 5s [id=bb93986a-09a9-49f5-a8f9-b63fe9ff9266]
2025-05-28 18:27:11.457778 | orchestrator | 18:27:11.457 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-05-28 18:27:15.447243 | orchestrator | 18:27:15.446 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-05-28 18:27:15.454280 | orchestrator | 18:27:15.454 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-05-28 18:27:15.475719 | orchestrator | 18:27:15.475 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-05-28 18:27:15.483066 | orchestrator | 18:27:15.482 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-05-28 18:27:15.502421 | orchestrator | 18:27:15.502 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-05-28 18:27:15.589318 | orchestrator | 18:27:15.588 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-05-28 18:27:15.835652 | orchestrator | 18:27:15.835 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=c51b3b6a-c51a-4126-90e5-19ce13e2ffec]
2025-05-28 18:27:15.847503 | orchestrator | 18:27:15.846 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=c7a9cf88-9364-41b8-88ad-d6642f89e1c7]
2025-05-28 18:27:15.866839 | orchestrator | 18:27:15.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=3f33ab16-b639-440d-ac1b-a4a99753b81e]
2025-05-28 18:27:15.867417 | orchestrator | 18:27:15.867 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=7e8a159a-a40f-415e-ab81-88c2679d1e87]
2025-05-28 18:27:15.901767 | orchestrator | 18:27:15.901 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=ccd95cfc-fef5-427d-aa82-20b0aa69c6ea]
2025-05-28 18:27:15.939074 | orchestrator | 18:27:15.938 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=4fcdec5e-7c62-42ab-b54c-67d461c9b6b3]
2025-05-28 18:27:19.105957 | orchestrator | 18:27:19.105 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=41c298a9-432b-4e41-a2cf-36f0cc90e43b]
2025-05-28 18:27:19.112952 | orchestrator | 18:27:19.112 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-05-28 18:27:19.113075 | orchestrator | 18:27:19.112 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-05-28 18:27:19.113172 | orchestrator | 18:27:19.113 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-05-28 18:27:19.331505 | orchestrator | 18:27:19.331 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=4887333d-59d8-4383-a523-f638cc94e768]
2025-05-28 18:27:19.354820 | orchestrator | 18:27:19.351 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-05-28 18:27:19.355972 | orchestrator | 18:27:19.355 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-05-28 18:27:19.357747 | orchestrator | 18:27:19.357 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-05-28 18:27:19.358763 | orchestrator | 18:27:19.358 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-05-28 18:27:19.362822 | orchestrator | 18:27:19.362 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=1bd8accf-44e1-4121-9886-ab4194c6d676]
2025-05-28 18:27:19.365778 | orchestrator | 18:27:19.365 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-05-28 18:27:19.366378 | orchestrator | 18:27:19.366 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-05-28 18:27:19.367737 | orchestrator | 18:27:19.367 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-05-28 18:27:19.368516 | orchestrator | 18:27:19.368 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-05-28 18:27:19.380471 | orchestrator | 18:27:19.380 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-05-28 18:27:20.041240 | orchestrator | 18:27:20.040 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=06afe1f2-a8d5-40fc-86cd-c180d51c5700]
2025-05-28 18:27:20.051255 | orchestrator | 18:27:20.051 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-05-28 18:27:20.251846 | orchestrator | 18:27:20.251 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=6a7300c5-bbb5-4352-ab1b-60cbbc1224fc]
2025-05-28 18:27:20.259577 | orchestrator | 18:27:20.259 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-05-28 18:27:20.393489 | orchestrator | 18:27:20.393 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=be6b84f1-3bf8-4085-8dd9-5f5a07df2dea]
2025-05-28 18:27:20.407024 | orchestrator | 18:27:20.406 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=0aa05597-e7cb-412d-ab61-df31187d2171]
2025-05-28 18:27:20.408362 | orchestrator | 18:27:20.408 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-05-28 18:27:20.413339 | orchestrator | 18:27:20.413 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-05-28 18:27:20.552958 | orchestrator | 18:27:20.552 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=7211c1cc-2b63-49e0-bad8-391e93741255]
2025-05-28 18:27:20.562724 | orchestrator | 18:27:20.562 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-05-28 18:27:20.572095 | orchestrator | 18:27:20.571 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=f80b1cc3-3ced-47f9-8c57-1d3cc3c1896b]
2025-05-28 18:27:20.576994 | orchestrator | 18:27:20.576 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-05-28 18:27:20.723734 | orchestrator | 18:27:20.723 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=487e6fa5-9a50-427d-92f4-e60c18495163]
2025-05-28 18:27:20.734315 | orchestrator | 18:27:20.734 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-05-28 18:27:20.862213 | orchestrator | 18:27:20.861 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=2b448134-feec-4d3d-842e-6bec812b30b2]
2025-05-28 18:27:20.994730 | orchestrator | 18:27:20.994 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=1715b9fb-98d0-42ec-9a0f-51ceff1ac602]
2025-05-28 18:27:24.993348 | orchestrator | 18:27:24.992 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=745a3022-8af2-4872-a9b3-a4f9c7941da3]
2025-05-28 18:27:25.020380 | orchestrator | 18:27:25.020 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=2e4c75ef-e582-46e8-b1fb-6b9dd1082c1c]
2025-05-28 18:27:25.048782 | orchestrator | 18:27:25.048 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=2c35ac1b-9cbf-44e7-ba9f-8bea165ab058]
2025-05-28 18:27:25.203768 | orchestrator | 18:27:25.203 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=fa02a324-d7da-49f4-9aba-f6f40c1cb2e3]
2025-05-28 18:27:25.205255 | orchestrator | 18:27:25.205 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=335baeb9-c21a-491a-b0f0-744cc4458d5d]
2025-05-28 18:27:25.457064 | orchestrator | 18:27:25.456 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=0a697678-bc3f-4835-87b5-b26a5639ef7e]
2025-05-28 18:27:25.856822 | orchestrator | 18:27:25.856 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=1663aaf8-b01a-4a8b-929d-23d3d05f10c3]
2025-05-28 18:27:26.624338 | orchestrator | 18:27:26.618 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=8064adc0-dd71-4087-b6a6-a420a67a7dfd]
2025-05-28 18:27:26.650686 | orchestrator | 18:27:26.650 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-05-28 18:27:26.654841 | orchestrator | 18:27:26.654 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-05-28 18:27:26.659674 | orchestrator | 18:27:26.659 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-05-28 18:27:26.660877 | orchestrator | 18:27:26.660 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-05-28 18:27:26.664739 | orchestrator | 18:27:26.664 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-05-28 18:27:26.670567 | orchestrator | 18:27:26.670 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-05-28 18:27:26.679701 | orchestrator | 18:27:26.679 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-05-28 18:27:33.519812 | orchestrator | 18:27:33.519 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=7bc0b026-d46e-4922-8f0a-af25e8ad8e13]
2025-05-28 18:27:33.529142 | orchestrator | 18:27:33.528 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-05-28 18:27:33.538099 | orchestrator | 18:27:33.537 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-05-28 18:27:33.539602 | orchestrator | 18:27:33.539 STDOUT terraform: local_file.inventory: Creating...
2025-05-28 18:27:33.542588 | orchestrator | 18:27:33.542 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=772975a64040ae083d18c56cbd236ffc0f965d51]
2025-05-28 18:27:33.549381 | orchestrator | 18:27:33.549 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=40e7bb5de6686a7ae4114499d0fd9020b2918068]
2025-05-28 18:27:34.347541 | orchestrator | 18:27:34.347 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=7bc0b026-d46e-4922-8f0a-af25e8ad8e13]
2025-05-28 18:27:36.654338 | orchestrator | 18:27:36.653 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-05-28 18:27:36.661644 | orchestrator | 18:27:36.661 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-05-28 18:27:36.663797 | orchestrator | 18:27:36.663 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-05-28 18:27:36.671256 | orchestrator | 18:27:36.670 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-05-28 18:27:36.671371 | orchestrator | 18:27:36.671 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-05-28 18:27:36.681751 | orchestrator | 18:27:36.681 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-05-28 18:27:46.657585 | orchestrator | 18:27:46.657 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-05-28 18:27:46.662604 | orchestrator | 18:27:46.662 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-05-28 18:27:46.664816 | orchestrator | 18:27:46.664 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-05-28 18:27:46.671999 | orchestrator | 18:27:46.671 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-05-28 18:27:46.672086 | orchestrator | 18:27:46.671 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-05-28 18:27:46.682358 | orchestrator | 18:27:46.682 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-05-28 18:27:47.147572 | orchestrator | 18:27:47.147 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=d325687a-2006-4cad-9181-cc3ea85978d5]
2025-05-28 18:27:47.250452 | orchestrator | 18:27:47.250 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=e452ab41-59ec-4bcb-afeb-274459d51a65]
2025-05-28 18:27:47.253924 | orchestrator | 18:27:47.253 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=81f34a87-d075-413a-93b3-b70de00c79fc]
2025-05-28 18:27:47.287559 | orchestrator | 18:27:47.287 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=1792456c-b243-4266-a152-264c58855ec4]
2025-05-28 18:27:47.314530 | orchestrator | 18:27:47.314 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=924752d2-d635-475e-a2d5-6d30ecd3defd]
2025-05-28 18:27:56.673468 | orchestrator | 18:27:56.673 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-05-28 18:27:57.318875 | orchestrator | 18:27:57.318 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=78afe64f-bdec-48bc-af1d-3b709a598fb0]
2025-05-28 18:27:57.334371 | orchestrator | 18:27:57.334 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-05-28 18:27:57.345760 | orchestrator | 18:27:57.345 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-05-28 18:27:57.348472 | orchestrator | 18:27:57.348 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-05-28 18:27:57.349446 | orchestrator | 18:27:57.349 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-05-28 18:27:57.358305 | orchestrator | 18:27:57.358 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3137482401201001006]
2025-05-28 18:27:57.363897 | orchestrator | 18:27:57.359 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-05-28 18:27:57.363958 | orchestrator | 18:27:57.360 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-05-28 18:27:57.367138 | orchestrator | 18:27:57.367 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-05-28 18:27:57.390175 | orchestrator | 18:27:57.388 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-05-28 18:27:57.396134 | orchestrator | 18:27:57.395 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-05-28 18:27:57.397907 | orchestrator | 18:27:57.397 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-05-28 18:27:57.397985 | orchestrator | 18:27:57.397 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-05-28 18:28:02.674194 | orchestrator | 18:28:02.673 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=1792456c-b243-4266-a152-264c58855ec4/3485bbb9-dc34-4923-9640-15ed9830c3cd]
2025-05-28 18:28:02.700999 | orchestrator | 18:28:02.700 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=78afe64f-bdec-48bc-af1d-3b709a598fb0/384074e9-09a1-4592-86bd-93fc7dbc72b1]
2025-05-28 18:28:02.709832 | orchestrator | 18:28:02.709 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=1792456c-b243-4266-a152-264c58855ec4/6fe61b53-6367-46c0-9f1e-24f42cf64445]
2025-05-28 18:28:02.729067 | orchestrator | 18:28:02.728 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=e452ab41-59ec-4bcb-afeb-274459d51a65/30074f97-ca08-4933-8c1f-7f138584444d]
2025-05-28 18:28:02.737467 | orchestrator | 18:28:02.737 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=78afe64f-bdec-48bc-af1d-3b709a598fb0/1334c062-0c98-48ca-b2e9-c7f7d80524d4]
2025-05-28 18:28:02.763226 | orchestrator | 18:28:02.762 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=1792456c-b243-4266-a152-264c58855ec4/0c0aa11d-14fc-40a7-bbcb-a7c7d902b836]
2025-05-28 18:28:02.765004 | orchestrator | 18:28:02.764 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=e452ab41-59ec-4bcb-afeb-274459d51a65/669b4378-b931-4094-a90b-e4d774be1d1d]
2025-05-28 18:28:02.795285 | orchestrator | 18:28:02.794 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=e452ab41-59ec-4bcb-afeb-274459d51a65/1e78336b-5c45-4f72-b22f-cac6621703c1]
2025-05-28 18:28:02.812577 | orchestrator | 18:28:02.812 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=78afe64f-bdec-48bc-af1d-3b709a598fb0/49a2ee15-28bf-4b5f-b85e-3182eb91d801]
2025-05-28 18:28:07.398912 | orchestrator | 18:28:07.398 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-05-28 18:28:17.399858 | orchestrator | 18:28:17.399 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-05-28 18:28:18.167317 | orchestrator | 18:28:18.166 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=4ca89dd6-5cc0-44dd-9cff-af6aaf821bde]
2025-05-28 18:28:18.195797 | orchestrator | 18:28:18.195 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-05-28 18:28:18.195896 | orchestrator | 18:28:18.195 STDOUT terraform: Outputs:
2025-05-28 18:28:18.195929 | orchestrator | 18:28:18.195 STDOUT terraform: manager_address =
2025-05-28 18:28:18.195943 | orchestrator | 18:28:18.195 STDOUT terraform: private_key =
2025-05-28 18:28:18.319293 | orchestrator | ok: Runtime: 0:01:34.514696
2025-05-28 18:28:18.368091 |
2025-05-28 18:28:18.368308 | TASK [Fetch manager address]
2025-05-28 18:28:18.821364 | orchestrator | ok
2025-05-28 18:28:18.832105 |
2025-05-28 18:28:18.832243 | TASK [Set manager_host address]
2025-05-28 18:28:18.912907 | orchestrator | ok
2025-05-28 18:28:18.922524 |
2025-05-28 18:28:18.922650 | LOOP [Update ansible collections]
2025-05-28 18:28:19.839568 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-05-28 18:28:19.840031 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-28 18:28:19.840096 | orchestrator | Starting galaxy collection install process
2025-05-28 18:28:19.840190 | orchestrator | Process install dependency map
2025-05-28 18:28:19.840231 | orchestrator | Starting collection install process
2025-05-28 18:28:19.840266 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2025-05-28 18:28:19.840308 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2025-05-28 18:28:19.840349 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-05-28 18:28:19.840433 | orchestrator | ok: Item: commons Runtime: 0:00:00.578922
2025-05-28 18:28:20.718783 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-28 18:28:20.719036 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-05-28 18:28:20.719094 | orchestrator | Starting galaxy collection install process
2025-05-28 18:28:20.719136 | orchestrator | Process install dependency map
2025-05-28 18:28:20.719173 | orchestrator | Starting collection install process
2025-05-28 18:28:20.719208 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2025-05-28 18:28:20.719244 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2025-05-28 18:28:20.719279 | orchestrator | osism.services:999.0.0 was installed successfully
2025-05-28 18:28:20.719331 | orchestrator | ok: Item: services Runtime: 0:00:00.613177
2025-05-28 18:28:20.738181 |
2025-05-28 18:28:20.738335 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-05-28 18:28:31.294296 | orchestrator | ok
2025-05-28 18:28:31.302025 |
2025-05-28 18:28:31.302133 | TASK [Wait a little longer for the manager so that everything is ready]
2025-05-28 18:29:31.348212 | orchestrator | ok
2025-05-28 18:29:31.359538 |
2025-05-28 18:29:31.359724 | TASK [Fetch manager ssh hostkey]
2025-05-28 18:29:32.941756 | orchestrator | Output suppressed because no_log was given
2025-05-28 18:29:32.956237 |
2025-05-28 18:29:32.956434 | TASK [Get ssh keypair from terraform environment]
2025-05-28 18:29:33.497385 | orchestrator | ok: Runtime: 0:00:00.011519
2025-05-28 18:29:33.512776 |
2025-05-28 18:29:33.512945 | TASK [Point out that the following task takes some time and does not give any output]
2025-05-28 18:29:33.562447 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-05-28 18:29:33.572876 |
2025-05-28 18:29:33.573000 | TASK [Run manager part 0]
2025-05-28 18:29:34.476174 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-28 18:29:34.523594 | orchestrator |
2025-05-28 18:29:34.523690 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-05-28 18:29:34.523701 | orchestrator |
2025-05-28 18:29:34.523724 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-05-28 18:29:36.702803 | orchestrator | ok: [testbed-manager]
2025-05-28 18:29:36.702909 | orchestrator |
2025-05-28 18:29:36.702941 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-05-28 18:29:36.702952 | orchestrator |
2025-05-28 18:29:36.702962 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-28 18:29:38.764016 | orchestrator | ok: [testbed-manager]
2025-05-28 18:29:38.764107 | orchestrator |
2025-05-28 18:29:38.764117 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-05-28 18:29:39.456509 | orchestrator | ok: [testbed-manager]
2025-05-28 18:29:39.456631 | orchestrator |
2025-05-28 18:29:39.456647 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-05-28 18:29:39.509226 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:29:39.509333 | orchestrator |
2025-05-28 18:29:39.509353 | orchestrator | TASK [Update package cache] ****************************************************
2025-05-28 18:29:39.535008 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:29:39.535088 | orchestrator |
2025-05-28 18:29:39.535098 | orchestrator | TASK [Install required packages] ***********************************************
2025-05-28 18:29:39.583253 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:29:39.583326 | orchestrator |
2025-05-28 18:29:39.583333 | orchestrator | TASK [Remove some python packages] *********************************************
2025-05-28 18:29:39.621022 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:29:39.621089 | orchestrator |
2025-05-28 18:29:39.621094 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-05-28 18:29:39.653872 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:29:39.653983 | orchestrator |
2025-05-28 18:29:39.654003 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-05-28 18:29:39.686431 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:29:39.686484 | orchestrator |
2025-05-28 18:29:39.686493 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-05-28 18:29:39.719370 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:29:39.719427 | orchestrator |
2025-05-28 18:29:39.719435 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-05-28 18:29:40.585942 | orchestrator | changed: [testbed-manager]
2025-05-28 18:29:40.586028 | orchestrator |
2025-05-28 18:29:40.586036 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-05-28 18:32:43.256530 | orchestrator | changed: [testbed-manager]
2025-05-28 18:32:43.256611 | orchestrator |
2025-05-28 18:32:43.256628 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-28 18:34:13.287953 | orchestrator | changed: [testbed-manager]
2025-05-28 18:34:13.288056 | orchestrator |
2025-05-28 18:34:13.288072 | orchestrator | TASK [Install required packages] ***********************************************
2025-05-28 18:34:33.172485 | orchestrator | changed: [testbed-manager]
2025-05-28 18:34:33.172562 | orchestrator |
2025-05-28 18:34:33.172572 | orchestrator | TASK [Remove some python packages] *********************************************
2025-05-28 18:34:41.910981 | orchestrator | changed: [testbed-manager]
2025-05-28 18:34:41.911071 | orchestrator |
2025-05-28 18:34:41.911088 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-05-28 18:34:41.961603 | orchestrator | ok: [testbed-manager]
2025-05-28 18:34:41.961667 | orchestrator |
2025-05-28 18:34:41.961677 | orchestrator | TASK [Get current user] ********************************************************
2025-05-28 18:34:42.762255 | orchestrator | ok: [testbed-manager]
2025-05-28 18:34:42.762448 | orchestrator |
2025-05-28 18:34:42.762496 | orchestrator | TASK [Create venv directory] ***************************************************
2025-05-28 18:34:43.524684 | orchestrator | changed: [testbed-manager]
2025-05-28 18:34:43.524795 | orchestrator |
2025-05-28 18:34:43.524811 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-05-28 18:34:49.916652 | orchestrator | changed: [testbed-manager]
2025-05-28 18:34:49.916756 | orchestrator |
2025-05-28 18:34:49.916797 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-05-28 18:34:56.020663 | orchestrator | changed: [testbed-manager]
2025-05-28 18:34:56.020767 | orchestrator |
2025-05-28 18:34:56.020784 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-05-28 18:34:58.692305 | orchestrator | changed: [testbed-manager]
2025-05-28 18:34:58.692400 | orchestrator |
2025-05-28 18:34:58.692417 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-05-28 18:35:00.562546 | orchestrator | changed: [testbed-manager]
2025-05-28 18:35:00.562769 | orchestrator |
2025-05-28 18:35:00.562792 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-05-28 18:35:01.739700 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-05-28 18:35:01.739821 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-05-28 18:35:01.739837 | orchestrator |
2025-05-28 18:35:01.739850 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-05-28 18:35:01.786187 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-05-28 18:35:01.786274 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-05-28 18:35:01.786289 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-05-28 18:35:01.786301 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-05-28 18:35:04.848148 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-05-28 18:35:04.848231 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-05-28 18:35:04.848246 | orchestrator |
2025-05-28 18:35:04.848259 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-05-28 18:35:05.443438 | orchestrator | changed: [testbed-manager]
2025-05-28 18:35:05.443507 | orchestrator |
2025-05-28 18:35:05.443516 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-05-28 18:38:26.686000 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-05-28 18:38:26.686154 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-05-28 18:38:26.686176 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-05-28 18:38:26.686189 | orchestrator |
2025-05-28 18:38:26.686202 | orchestrator | TASK [Install local collections] ***********************************************
2025-05-28 18:38:28.997368 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-05-28 18:38:28.997494 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-05-28 18:38:28.997510 | orchestrator |
2025-05-28 18:38:28.997523 | orchestrator | PLAY [Create operator user] ****************************************************
2025-05-28 18:38:28.997536 | orchestrator |
2025-05-28 18:38:28.997547 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-28 18:38:30.388512 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:30.388567 | orchestrator |
2025-05-28 18:38:30.388580 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-05-28 18:38:30.437568 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:30.437616 | orchestrator |
2025-05-28 18:38:30.437625 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-05-28 18:38:30.541175 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:30.541213 | orchestrator |
2025-05-28 18:38:30.541219 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-05-28 18:38:31.289791 | orchestrator | changed: [testbed-manager]
2025-05-28 18:38:31.289883 | orchestrator |
2025-05-28 18:38:31.289898 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-05-28 18:38:32.023278 | orchestrator | changed: [testbed-manager]
2025-05-28 18:38:32.023375 | orchestrator |
2025-05-28 18:38:32.023392 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-05-28 18:38:33.457855 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-05-28 18:38:33.457900 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-05-28 18:38:33.457909 | orchestrator |
2025-05-28 18:38:33.457933 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-05-28 18:38:34.837622 | orchestrator | changed: [testbed-manager]
2025-05-28 18:38:34.837735 | orchestrator |
2025-05-28 18:38:34.837751 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-05-28 18:38:36.585380 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-05-28 18:38:36.585461 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-05-28 18:38:36.585473 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-05-28 18:38:36.585483 | orchestrator |
2025-05-28 18:38:36.585492 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-28 18:38:37.138048 | orchestrator | changed: [testbed-manager]
2025-05-28 18:38:37.138136 | orchestrator |
2025-05-28 18:38:37.138168 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-28 18:38:37.209221 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:38:37.209311 | orchestrator |
2025-05-28 18:38:37.209328 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-28 18:38:38.076861 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:38:38.076965 | orchestrator | changed: [testbed-manager]
2025-05-28 18:38:38.076992 | orchestrator |
2025-05-28 18:38:38.077013 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-28 18:38:38.117113 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:38:38.117187 | orchestrator |
2025-05-28 18:38:38.117202 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-28 18:38:38.153154 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:38:38.153227 | orchestrator |
2025-05-28 18:38:38.153240 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-28 18:38:38.193368 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:38:38.193460 | orchestrator |
2025-05-28 18:38:38.193476 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-28 18:38:38.244217 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:38:38.244284 | orchestrator |
2025-05-28 18:38:38.244299 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-28 18:38:38.945903 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:38.945938 | orchestrator |
2025-05-28 18:38:38.945944 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-05-28 18:38:38.945949 | orchestrator |
2025-05-28 18:38:38.945955 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-28 18:38:40.312365 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:40.312508 | orchestrator |
2025-05-28 18:38:40.312527 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-05-28 18:38:41.316909 | orchestrator | changed: [testbed-manager]
2025-05-28 18:38:41.316964 | orchestrator |
2025-05-28 18:38:41.316971 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 18:38:41.316979 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-05-28 18:38:41.316984 | orchestrator |
2025-05-28 18:38:41.497571 | orchestrator | ok: Runtime: 0:09:07.520079
2025-05-28 18:38:41.512065 |
2025-05-28 18:38:41.512214 | TASK [Point out that the log in on the manager is now possible]
2025-05-28 18:38:41.556776 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-05-28 18:38:41.567694 |
2025-05-28 18:38:41.567823 | TASK [Point out that the following task takes some time and does not give any output]
2025-05-28 18:38:41.617182 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-05-28 18:38:41.628051 |
2025-05-28 18:38:41.628255 | TASK [Run manager part 1 + 2]
2025-05-28 18:38:42.469540 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-28 18:38:42.524103 | orchestrator |
2025-05-28 18:38:42.524154 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-05-28 18:38:42.524162 | orchestrator |
2025-05-28 18:38:42.524175 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-28 18:38:45.566829 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:45.566974 | orchestrator |
2025-05-28 18:38:45.567027 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-05-28 18:38:45.604706 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:38:45.604753 | orchestrator |
2025-05-28 18:38:45.604762 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-05-28 18:38:45.649183 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:45.649234 | orchestrator |
2025-05-28 18:38:45.649246 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-28 18:38:45.692625 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:45.692673 | orchestrator |
2025-05-28 18:38:45.692681 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-28 18:38:45.754555 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:45.754593 | orchestrator |
2025-05-28 18:38:45.754600 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-28 18:38:45.808478 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:45.808557 | orchestrator |
2025-05-28 18:38:45.808574 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-28 18:38:45.862602 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-05-28 18:38:45.862675 | orchestrator |
2025-05-28 18:38:45.862690 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-28 18:38:46.559377 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:46.559660 | orchestrator |
2025-05-28 18:38:46.559687 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-28 18:38:46.604609 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:38:46.604651 | orchestrator |
2025-05-28 18:38:46.604658 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-28 18:38:47.908925 | orchestrator | changed: [testbed-manager]
2025-05-28 18:38:47.909006 | orchestrator |
2025-05-28 18:38:47.909025 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-28 18:38:48.474356 | orchestrator | ok: [testbed-manager]
2025-05-28 18:38:48.474454 | orchestrator |
2025-05-28 18:38:48.474472 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-28 18:38:49.641368 | orchestrator | changed: [testbed-manager]
2025-05-28 18:38:49.641466 | orchestrator |
2025-05-28 18:38:49.641483 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-28 18:39:02.999625 | orchestrator | changed: [testbed-manager]
2025-05-28 18:39:02.999691 | orchestrator |
2025-05-28 18:39:02.999700 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-05-28 18:39:03.694411 | orchestrator | ok: [testbed-manager]
2025-05-28 18:39:03.694530 | orchestrator |
2025-05-28 18:39:03.694547 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-05-28 18:39:03.761998 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:39:03.762092 | orchestrator |
2025-05-28 18:39:03.762103 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-05-28 18:39:04.726873 | orchestrator | changed: [testbed-manager]
2025-05-28 18:39:04.726931 | orchestrator |
2025-05-28 18:39:04.726944 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-05-28 18:39:05.678832 | orchestrator | changed: [testbed-manager]
2025-05-28 18:39:05.678929 | orchestrator |
2025-05-28 18:39:05.678945 | orchestrator | TASK [Create configuration directory] ******************************************
2025-05-28 18:39:06.234356 | orchestrator | changed: [testbed-manager]
2025-05-28 18:39:06.234446 | orchestrator |
2025-05-28 18:39:06.234462 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-05-28 18:39:06.275338 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-05-28 18:39:06.275473 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-05-28 18:39:06.275492 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-05-28 18:39:06.275504 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-05-28 18:39:08.122552 | orchestrator | changed: [testbed-manager] 2025-05-28 18:39:08.122599 | orchestrator | 2025-05-28 18:39:08.122607 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-28 18:39:17.221368 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-28 18:39:17.221515 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-28 18:39:17.221538 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-28 18:39:17.221551 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-28 18:39:17.221570 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-28 18:39:17.221582 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-28 18:39:17.221593 | orchestrator | 2025-05-28 18:39:17.221605 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-28 18:39:18.269915 | orchestrator | changed: [testbed-manager] 2025-05-28 18:39:18.270101 | orchestrator | 2025-05-28 18:39:18.270127 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-28 18:39:18.312929 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:39:18.313016 | orchestrator | 2025-05-28 18:39:18.313031 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-28 18:39:21.217395 | orchestrator | changed: [testbed-manager] 2025-05-28 18:39:21.217528 | orchestrator | 2025-05-28 18:39:21.217547 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-28 18:39:21.261084 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:39:21.261148 | orchestrator | 2025-05-28 18:39:21.261156 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-28 18:40:55.391792 | orchestrator | changed: [testbed-manager] 2025-05-28 
18:40:55.391989 | orchestrator | 2025-05-28 18:40:55.392011 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-28 18:40:56.479318 | orchestrator | ok: [testbed-manager] 2025-05-28 18:40:56.479372 | orchestrator | 2025-05-28 18:40:56.479380 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 18:40:56.479388 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-28 18:40:56.479394 | orchestrator | 2025-05-28 18:40:56.755697 | orchestrator | ok: Runtime: 0:02:14.645452 2025-05-28 18:40:56.773543 | 2025-05-28 18:40:56.773699 | TASK [Reboot manager] 2025-05-28 18:40:58.311895 | orchestrator | ok: Runtime: 0:00:00.955922 2025-05-28 18:40:58.326991 | 2025-05-28 18:40:58.327194 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-28 18:41:12.538692 | orchestrator | ok 2025-05-28 18:41:12.553330 | 2025-05-28 18:41:12.553484 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-28 18:42:12.608294 | orchestrator | ok 2025-05-28 18:42:12.617089 | 2025-05-28 18:42:12.617218 | TASK [Deploy manager + bootstrap nodes] 2025-05-28 18:42:15.163188 | orchestrator | 2025-05-28 18:42:15.163413 | orchestrator | # DEPLOY MANAGER 2025-05-28 18:42:15.163467 | orchestrator | 2025-05-28 18:42:15.163481 | orchestrator | + set -e 2025-05-28 18:42:15.163493 | orchestrator | + echo 2025-05-28 18:42:15.163506 | orchestrator | + echo '# DEPLOY MANAGER' 2025-05-28 18:42:15.163522 | orchestrator | + echo 2025-05-28 18:42:15.163571 | orchestrator | + cat /opt/manager-vars.sh 2025-05-28 18:42:15.166694 | orchestrator | export NUMBER_OF_NODES=6 2025-05-28 18:42:15.166717 | orchestrator | 2025-05-28 18:42:15.166730 | orchestrator | export CEPH_VERSION=reef 2025-05-28 18:42:15.166742 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-28 18:42:15.166753 | orchestrator 
| export MANAGER_VERSION=8.1.0
2025-05-28 18:42:15.166773 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-05-28 18:42:15.166784 | orchestrator |
2025-05-28 18:42:15.166800 | orchestrator | export ARA=false
2025-05-28 18:42:15.166811 | orchestrator | export TEMPEST=false
2025-05-28 18:42:15.166827 | orchestrator | export IS_ZUUL=true
2025-05-28 18:42:15.166837 | orchestrator |
2025-05-28 18:42:15.166854 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.158
2025-05-28 18:42:15.166865 | orchestrator | export EXTERNAL_API=false
2025-05-28 18:42:15.166875 | orchestrator |
2025-05-28 18:42:15.166895 | orchestrator | export IMAGE_USER=ubuntu
2025-05-28 18:42:15.166904 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-05-28 18:42:15.166914 | orchestrator |
2025-05-28 18:42:15.166927 | orchestrator | export CEPH_STACK=ceph-ansible
2025-05-28 18:42:15.167076 | orchestrator |
2025-05-28 18:42:15.167093 | orchestrator | + echo
2025-05-28 18:42:15.167103 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-28 18:42:15.168060 | orchestrator | ++ export INTERACTIVE=false
2025-05-28 18:42:15.168078 | orchestrator | ++ INTERACTIVE=false
2025-05-28 18:42:15.168090 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-28 18:42:15.168100 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-28 18:42:15.168115 | orchestrator | + source /opt/manager-vars.sh
2025-05-28 18:42:15.168256 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-28 18:42:15.168271 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-28 18:42:15.168281 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-28 18:42:15.168291 | orchestrator | ++ CEPH_VERSION=reef
2025-05-28 18:42:15.168301 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-28 18:42:15.168311 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-28 18:42:15.168321 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-28 18:42:15.168339 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-05-28 18:42:15.168349 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-28 18:42:15.168359 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-28 18:42:15.168372 | orchestrator | ++ export ARA=false
2025-05-28 18:42:15.168383 | orchestrator | ++ ARA=false
2025-05-28 18:42:15.168400 | orchestrator | ++ export TEMPEST=false
2025-05-28 18:42:15.168410 | orchestrator | ++ TEMPEST=false
2025-05-28 18:42:15.168464 | orchestrator | ++ export IS_ZUUL=true
2025-05-28 18:42:15.168484 | orchestrator | ++ IS_ZUUL=true
2025-05-28 18:42:15.168500 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.158
2025-05-28 18:42:15.168519 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.158
2025-05-28 18:42:15.168534 | orchestrator | ++ export EXTERNAL_API=false
2025-05-28 18:42:15.168545 | orchestrator | ++ EXTERNAL_API=false
2025-05-28 18:42:15.168554 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-28 18:42:15.168564 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-28 18:42:15.168573 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-28 18:42:15.168583 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-28 18:42:15.168592 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-28 18:42:15.168602 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-28 18:42:15.168616 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-05-28 18:42:15.217833 | orchestrator | + docker version
2025-05-28 18:42:15.484152 | orchestrator | Client: Docker Engine - Community
2025-05-28 18:42:15.484269 | orchestrator | Version: 26.1.4
2025-05-28 18:42:15.484291 | orchestrator | API version: 1.45
2025-05-28 18:42:15.484303 | orchestrator | Go version: go1.21.11
2025-05-28 18:42:15.484314 | orchestrator | Git commit: 5650f9b
2025-05-28 18:42:15.484325 | orchestrator | Built: Wed Jun 5 11:28:57 2024
2025-05-28 18:42:15.484338 | orchestrator | OS/Arch: linux/amd64
2025-05-28 18:42:15.484349 | orchestrator | Context: default
2025-05-28 18:42:15.484360 | orchestrator |
2025-05-28 18:42:15.484372 | orchestrator | Server: Docker Engine - Community
2025-05-28 18:42:15.484397 | orchestrator | Engine:
2025-05-28 18:42:15.484410 | orchestrator | Version: 26.1.4
2025-05-28 18:42:15.484463 | orchestrator | API version: 1.45 (minimum version 1.24)
2025-05-28 18:42:15.484475 | orchestrator | Go version: go1.21.11
2025-05-28 18:42:15.484486 | orchestrator | Git commit: de5c9cf
2025-05-28 18:42:15.484528 | orchestrator | Built: Wed Jun 5 11:28:57 2024
2025-05-28 18:42:15.484540 | orchestrator | OS/Arch: linux/amd64
2025-05-28 18:42:15.484551 | orchestrator | Experimental: false
2025-05-28 18:42:15.484563 | orchestrator | containerd:
2025-05-28 18:42:15.484574 | orchestrator | Version: 1.7.27
2025-05-28 18:42:15.484585 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-05-28 18:42:15.484596 | orchestrator | runc:
2025-05-28 18:42:15.484607 | orchestrator | Version: 1.2.5
2025-05-28 18:42:15.484618 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-05-28 18:42:15.484629 | orchestrator | docker-init:
2025-05-28 18:42:15.484640 | orchestrator | Version: 0.19.0
2025-05-28 18:42:15.484652 | orchestrator | GitCommit: de40ad0
2025-05-28 18:42:15.487283 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-05-28 18:42:15.496461 | orchestrator | + set -e
2025-05-28 18:42:15.496531 | orchestrator | + source /opt/manager-vars.sh
2025-05-28 18:42:15.496546 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-28 18:42:15.496560 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-28 18:42:15.496571 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-28 18:42:15.496582 | orchestrator | ++ CEPH_VERSION=reef
2025-05-28 18:42:15.496594 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-28 18:42:15.496607 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-28 18:42:15.496625 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-28 18:42:15.496637 | orchestrator | ++ MANAGER_VERSION=8.1.0
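The repeated `++ export …` pairs above come from sourcing `/opt/manager-vars.sh` under `set -x`: every deploy parameter is an exported shell variable, so each child script (`include.sh`, `000-manager.sh`) sees identical settings. A hypothetical condensed sketch of that vars file, using `${VAR:-default}` so values already set in the environment win (an assumption; the real file assigns fixed values):

```shell
# Sketch of a manager-vars.sh-style parameter file (condensed, hypothetical).
# Exported so that scripts sourced later inherit the same configuration;
# the :- defaults let a caller override any value before sourcing.
export NUMBER_OF_NODES="${NUMBER_OF_NODES:-6}"
export CEPH_VERSION="${CEPH_VERSION:-reef}"
export CONFIGURATION_VERSION="${CONFIGURATION_VERSION:-main}"
export MANAGER_VERSION="${MANAGER_VERSION:-8.1.0}"
export OPENSTACK_VERSION="${OPENSTACK_VERSION:-2024.2}"
export CEPH_STACK="${CEPH_STACK:-ceph-ansible}"
```

Sourcing this file twice is harmless, which matches the trace, where both `000-manager.sh` and the outer wrapper source the same file.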
2025-05-28 18:42:15.496657 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-28 18:42:15.496669 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-28 18:42:15.496680 | orchestrator | ++ export ARA=false
2025-05-28 18:42:15.496699 | orchestrator | ++ ARA=false
2025-05-28 18:42:15.496710 | orchestrator | ++ export TEMPEST=false
2025-05-28 18:42:15.496721 | orchestrator | ++ TEMPEST=false
2025-05-28 18:42:15.496732 | orchestrator | ++ export IS_ZUUL=true
2025-05-28 18:42:15.496743 | orchestrator | ++ IS_ZUUL=true
2025-05-28 18:42:15.496758 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.158
2025-05-28 18:42:15.496770 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.158
2025-05-28 18:42:15.496781 | orchestrator | ++ export EXTERNAL_API=false
2025-05-28 18:42:15.496791 | orchestrator | ++ EXTERNAL_API=false
2025-05-28 18:42:15.496802 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-28 18:42:15.496813 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-28 18:42:15.496824 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-28 18:42:15.496835 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-28 18:42:15.496845 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-28 18:42:15.496856 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-28 18:42:15.496866 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-28 18:42:15.496881 | orchestrator | ++ export INTERACTIVE=false
2025-05-28 18:42:15.496899 | orchestrator | ++ INTERACTIVE=false
2025-05-28 18:42:15.496933 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-28 18:42:15.496954 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-28 18:42:15.496974 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-28 18:42:15.496992 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0
2025-05-28 18:42:15.504467 | orchestrator | + set -e
2025-05-28 18:42:15.504504 | orchestrator | + VERSION=8.1.0
2025-05-28 18:42:15.504522 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-05-28 18:42:15.512187 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-28 18:42:15.512230 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-05-28 18:42:15.516932 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-05-28 18:42:15.520479 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-05-28 18:42:15.528891 | orchestrator | /opt/configuration ~
2025-05-28 18:42:15.528945 | orchestrator | + set -e
2025-05-28 18:42:15.528958 | orchestrator | + pushd /opt/configuration
2025-05-28 18:42:15.528969 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-28 18:42:15.531374 | orchestrator | + source /opt/venv/bin/activate
2025-05-28 18:42:15.532613 | orchestrator | ++ deactivate nondestructive
2025-05-28 18:42:15.532645 | orchestrator | ++ '[' -n '' ']'
2025-05-28 18:42:15.532657 | orchestrator | ++ '[' -n '' ']'
2025-05-28 18:42:15.532669 | orchestrator | ++ hash -r
2025-05-28 18:42:15.532680 | orchestrator | ++ '[' -n '' ']'
2025-05-28 18:42:15.532691 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-28 18:42:15.532702 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-28 18:42:15.532713 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-28 18:42:15.532761 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-28 18:42:15.532773 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-28 18:42:15.532785 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-28 18:42:15.532797 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-28 18:42:15.532816 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-28 18:42:15.532828 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-28 18:42:15.532839 | orchestrator | ++ export PATH
2025-05-28 18:42:15.532850 | orchestrator | ++ '[' -n '' ']'
2025-05-28 18:42:15.532866 | orchestrator | ++ '[' -z '' ']'
2025-05-28 18:42:15.532878 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-28 18:42:15.532958 | orchestrator | ++ PS1='(venv) '
2025-05-28 18:42:15.532973 | orchestrator | ++ export PS1
2025-05-28 18:42:15.532992 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-28 18:42:15.533003 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-28 18:42:15.533014 | orchestrator | ++ hash -r
2025-05-28 18:42:15.533042 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-05-28 18:42:16.580943 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-05-28 18:42:16.581737 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-05-28 18:42:16.582900 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-05-28 18:42:16.584061 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-05-28 18:42:16.585461 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-05-28 18:42:16.595079 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-05-28 18:42:16.596363 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-05-28 18:42:16.597628 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-05-28 18:42:16.598733 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-05-28 18:42:16.629918 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-05-28 18:42:16.631258 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-05-28 18:42:16.632728 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-05-28 18:42:16.634495 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-05-28 18:42:16.638349 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-05-28 18:42:16.841696 | orchestrator | ++ which gilt
2025-05-28 18:42:16.845326 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-05-28 18:42:16.845385 | orchestrator | + /opt/venv/bin/gilt overlay
2025-05-28 18:42:17.062408 | orchestrator | osism.cfg-generics:
2025-05-28 18:42:17.062536 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics
2025-05-28 18:42:18.641366 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-05-28 18:42:18.641529 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-05-28 18:42:18.641559 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-05-28 18:42:18.641588 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-05-28 18:42:19.566727 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-05-28 18:42:19.575490 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-05-28 18:42:19.916521 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-05-28 18:42:19.967738 | orchestrator | ~
2025-05-28 18:42:19.967816 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-28 18:42:19.967830 | orchestrator | + deactivate
2025-05-28 18:42:19.967843 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-28 18:42:19.967857 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-28 18:42:19.967868 | orchestrator | + export PATH
2025-05-28 18:42:19.967879 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-28 18:42:19.967892 | orchestrator | + '[' -n '' ']'
2025-05-28 18:42:19.967903 | orchestrator | + hash -r
2025-05-28 18:42:19.967913 | orchestrator | + '[' -n '' ']'
2025-05-28 18:42:19.967924 | orchestrator | + unset VIRTUAL_ENV
2025-05-28 18:42:19.967935 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-28 18:42:19.967946 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-28 18:42:19.967957 | orchestrator | + unset -f deactivate
2025-05-28 18:42:19.967968 | orchestrator | + popd
2025-05-28 18:42:19.969237 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-05-28 18:42:19.969255 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-05-28 18:42:19.970200 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-28 18:42:20.024963 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-28 18:42:20.025056 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-05-28 18:42:20.025073 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-05-28 18:42:20.069556 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-28 18:42:20.069627 | orchestrator | + source /opt/venv/bin/activate
2025-05-28 18:42:20.069639 | orchestrator | ++ deactivate nondestructive
2025-05-28 18:42:20.069657 | orchestrator | ++ '[' -n '' ']'
2025-05-28 18:42:20.069669 | orchestrator | ++ '[' -n '' ']'
2025-05-28 18:42:20.069680 | orchestrator | ++ hash -r
2025-05-28 18:42:20.069691 | orchestrator | ++ '[' -n '' ']'
2025-05-28 18:42:20.069702 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-28 18:42:20.069714 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-28 18:42:20.069725 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-28 18:42:20.069748 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-28 18:42:20.069766 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-28 18:42:20.069777 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-28 18:42:20.069788 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-28 18:42:20.069800 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-28 18:42:20.069812 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-28 18:42:20.069830 | orchestrator | ++ export PATH
2025-05-28 18:42:20.069841 | orchestrator | ++ '[' -n '' ']'
2025-05-28 18:42:20.069853 | orchestrator | ++ '[' -z '' ']'
2025-05-28 18:42:20.069864 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-28 18:42:20.069875 | orchestrator | ++ PS1='(venv) '
2025-05-28 18:42:20.069886 | orchestrator | ++ export PS1
2025-05-28 18:42:20.069898 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-28 18:42:20.069909 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-28 18:42:20.069920 | orchestrator | ++ hash -r
2025-05-28 18:42:20.069936 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-05-28 18:42:21.228317 | orchestrator |
2025-05-28 18:42:21.228512 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-05-28 18:42:21.228527 | orchestrator |
2025-05-28 18:42:21.228534 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-28 18:42:21.815133 | orchestrator | ok: [testbed-manager]
2025-05-28 18:42:21.815257 | orchestrator |
2025-05-28 18:42:21.815272 | orchestrator | TASK [Copy fact files] *********************************************************
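The trace shows `set-manager-version.sh` pinning the requested release with in-place `sed` edits: one substitution rewrites `manager_version:` in `configuration.yml`, then two delete commands drop `ceph_version:` and `openstack_version:` so those come from elsewhere. A standalone re-creation of that pattern, run against a temporary file rather than the real `/opt/configuration` tree (the sed expressions are from the log; the sample file contents are invented):

```shell
# Demo of the sed-based version pinning seen in the trace, on a scratch file.
set -e
cfg="$(mktemp)"
printf 'manager_version: latest\nceph_version: quincy\n' > "$cfg"

VERSION=8.1.0
# Replace whatever value manager_version currently has with the pinned one.
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$cfg"
# Delete version keys that should fall back to defaults, as the script
# does for ceph_version: and openstack_version:.
sed -i '/ceph_version:/d' "$cfg"

cat "$cfg"   # -> manager_version: 8.1.0
```

GNU `sed -i` edits the file in place; on BSD/macOS sed the flag needs an explicit (possibly empty) backup suffix, so this sketch assumes a GNU userland like the Debian/Ubuntu hosts in the log.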
2025-05-28 18:42:22.839211 | orchestrator | changed: [testbed-manager]
2025-05-28 18:42:22.839338 | orchestrator |
2025-05-28 18:42:22.839354 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-05-28 18:42:22.839367 | orchestrator |
2025-05-28 18:42:22.839380 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-28 18:42:25.151036 | orchestrator | ok: [testbed-manager]
2025-05-28 18:42:25.151163 | orchestrator |
2025-05-28 18:42:25.151177 | orchestrator | TASK [Pull images] *************************************************************
2025-05-28 18:42:30.392808 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-05-28 18:42:30.392922 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2)
2025-05-28 18:42:30.392938 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0)
2025-05-28 18:42:30.392950 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0)
2025-05-28 18:42:30.392961 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0)
2025-05-28 18:42:30.392976 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine)
2025-05-28 18:42:30.392988 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7)
2025-05-28 18:42:30.393002 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0)
2025-05-28 18:42:30.393013 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2)
2025-05-28 18:42:30.393024 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine)
2025-05-28 18:42:30.393036 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1)
2025-05-28 18:42:30.393047 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2)
2025-05-28 18:42:30.393058 | orchestrator |
2025-05-28 18:42:30.393070 | orchestrator | TASK [Check status] ************************************************************
2025-05-28 18:43:46.972261 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-28 18:43:46.972417 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-28 18:43:46.972477 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-05-28 18:43:46.972504 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j184861141753.1591', 'results_file': '/home/dragon/.ansible_async/j184861141753.1591', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972524 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j715032993398.1616', 'results_file': '/home/dragon/.ansible_async/j715032993398.1616', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972544 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-28 18:43:46.972556 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-28 18:43:46.972568 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j715237601489.1641', 'results_file': '/home/dragon/.ansible_async/j715237601489.1641', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972579 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j915598348882.1673', 'results_file': '/home/dragon/.ansible_async/j915598348882.1673', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972590 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-28 18:43:46.972601 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-28 18:43:46.972613 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j205569755538.1710', 'results_file': '/home/dragon/.ansible_async/j205569755538.1710', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972625 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j899862487675.1742', 'results_file': '/home/dragon/.ansible_async/j899862487675.1742', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972675 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j574682457284.1782', 'results_file': '/home/dragon/.ansible_async/j574682457284.1782', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972687 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j363775617991.1807', 'results_file': '/home/dragon/.ansible_async/j363775617991.1807', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972698 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j133806211197.1839', 'results_file': '/home/dragon/.ansible_async/j133806211197.1839', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972710 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j527035447764.1879', 'results_file': '/home/dragon/.ansible_async/j527035447764.1879', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972721 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j504617019818.1910', 'results_file': '/home/dragon/.ansible_async/j504617019818.1910', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972732 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j790828874167.1941', 'results_file': '/home/dragon/.ansible_async/j790828874167.1941', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'})
2025-05-28 18:43:46.972743 | orchestrator |
2025-05-28 18:43:46.972756 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-05-28 18:43:47.021270 | orchestrator | ok: [testbed-manager]
2025-05-28 18:43:47.021322 | orchestrator |
2025-05-28 18:43:47.021340 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
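The "Pull images" / "Check status" pair above is Ansible's async pattern: each image pull is started as a background job (`started: 1, finished: 0`), and a second task polls the job IDs until they complete, hence the `FAILED - RETRYING` lines while pulls are still running. The same idea in plain shell, with placeholder commands standing in for `docker pull` so the sketch runs anywhere:

```shell
# Shell analogue of the async pull + status-check pattern from the play.
set -e
done_log="$(mktemp)"
pids=()
for image in example/image-a:1.0 example/image-b:2.0; do
    # Stand-in for `docker pull "$image"`: each job runs concurrently.
    (sleep 1 && echo "pulled $image" >> "$done_log") &
    pids+=("$!")
done

# "Check status": wait for each job; `wait PID` returns that job's exit
# status, so with `set -e` any failed pull aborts the script here.
for pid in "${pids[@]}"; do
    wait "$pid"
done
```

Unlike Ansible's bounded retry loop (120 attempts in the log), `wait` blocks indefinitely; a production script would add a timeout around it.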
2025-05-28 18:43:47.491254 | orchestrator | changed: [testbed-manager]
2025-05-28 18:43:47.491387 | orchestrator |
2025-05-28 18:43:47.491404 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-05-28 18:43:47.844396 | orchestrator | changed: [testbed-manager]
2025-05-28 18:43:47.844572 | orchestrator |
2025-05-28 18:43:47.844588 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-28 18:43:48.187520 | orchestrator | changed: [testbed-manager]
2025-05-28 18:43:48.187631 | orchestrator |
2025-05-28 18:43:48.187646 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-05-28 18:43:48.229618 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:43:48.229705 | orchestrator |
2025-05-28 18:43:48.229720 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-05-28 18:43:48.739015 | orchestrator | ok: [testbed-manager]
2025-05-28 18:43:48.739114 | orchestrator |
2025-05-28 18:43:48.739129 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-05-28 18:43:48.850247 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:43:48.850350 | orchestrator |
2025-05-28 18:43:48.850365 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-05-28 18:43:48.850378 | orchestrator |
2025-05-28 18:43:48.850390 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-28 18:43:50.804941 | orchestrator | ok: [testbed-manager]
2025-05-28 18:43:50.805052 | orchestrator |
2025-05-28 18:43:50.805069 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-05-28 18:43:50.908617 | orchestrator | included: osism.services.traefik for testbed-manager
2025-05-28 18:43:50.908706 | orchestrator |
2025-05-28 18:43:50.908719 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-05-28 18:43:50.978662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-05-28 18:43:50.978775 | orchestrator |
2025-05-28 18:43:50.978789 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-05-28 18:43:52.105635 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-05-28 18:43:52.106461 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-05-28 18:43:52.106493 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-05-28 18:43:52.106508 | orchestrator |
2025-05-28 18:43:52.106522 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-05-28 18:43:53.959020 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-05-28 18:43:53.959169 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-05-28 18:43:53.959186 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-05-28 18:43:53.959197 | orchestrator |
2025-05-28 18:43:53.959209 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-05-28 18:43:54.609817 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:43:54.609945 | orchestrator | changed: [testbed-manager]
2025-05-28 18:43:54.609962 | orchestrator |
2025-05-28 18:43:54.610002 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-05-28 18:43:55.270621 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:43:55.270781 | orchestrator | changed: [testbed-manager]
2025-05-28 18:43:55.270802 | orchestrator |
2025-05-28 18:43:55.270815 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-05-28 18:43:55.331232 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:43:55.331362 | orchestrator |
2025-05-28 18:43:55.331378 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-05-28 18:43:55.709264 | orchestrator | ok: [testbed-manager]
2025-05-28 18:43:55.709393 | orchestrator |
2025-05-28 18:43:55.709410 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-05-28 18:43:55.771521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-05-28 18:43:55.771663 | orchestrator |
2025-05-28 18:43:55.771688 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-05-28 18:43:56.774911 | orchestrator | changed: [testbed-manager]
2025-05-28 18:43:56.775058 | orchestrator |
2025-05-28 18:43:56.775086 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-05-28 18:43:57.577245 | orchestrator | changed: [testbed-manager]
2025-05-28 18:43:57.577393 | orchestrator |
2025-05-28 18:43:57.577409 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-05-28 18:44:01.318300 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:01.318416 | orchestrator |
2025-05-28 18:44:01.318501 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-05-28 18:44:01.425114 | orchestrator | included: osism.services.netbox for testbed-manager
2025-05-28 18:44:01.425219 | orchestrator |
2025-05-28 18:44:01.425240 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-05-28 18:44:01.489690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-05-28 18:44:01.489791 | orchestrator |
2025-05-28 18:44:01.489808 | orchestrator | TASK [osism.services.netbox : Install required packages] ***********************
2025-05-28 18:44:04.182298 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:04.182522 | orchestrator |
2025-05-28 18:44:04.182553 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-28 18:44:04.300705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-05-28 18:44:04.300819 | orchestrator |
2025-05-28 18:44:04.300836 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-05-28 18:44:05.422776 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-05-28 18:44:05.422886 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-05-28 18:44:05.422904 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-05-28 18:44:05.422948 | orchestrator |
2025-05-28 18:44:05.422963 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-05-28 18:44:05.501762 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-05-28 18:44:05.501871 | orchestrator |
2025-05-28 18:44:05.501887 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-05-28 18:44:06.133917 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-05-28 18:44:06.134072 | orchestrator |
2025-05-28 18:44:06.134082 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] ****************
2025-05-28 18:44:06.774273 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:06.774393 | orchestrator |
2025-05-28 18:44:06.774411 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-28 18:44:07.402376 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:44:07.402534 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:07.402549 | orchestrator |
2025-05-28 18:44:07.402558 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-05-28 18:44:07.789042 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:07.789181 | orchestrator |
2025-05-28 18:44:07.789209 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-05-28 18:44:08.129915 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:08.129996 | orchestrator |
2025-05-28 18:44:08.130005 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-05-28 18:44:08.171069 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:44:08.171141 | orchestrator |
2025-05-28 18:44:08.171148 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-05-28 18:44:08.809410 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:08.809576 | orchestrator |
2025-05-28 18:44:08.809600 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-28 18:44:08.880388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-05-28 18:44:08.880523 | orchestrator |
2025-05-28 18:44:08.880539 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-05-28 18:44:09.650248 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-05-28 18:44:09.650358 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-05-28 18:44:09.650374 | orchestrator |
2025-05-28 18:44:09.650387 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-05-28 18:44:10.331473 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-05-28 18:44:10.331567 | orchestrator |
2025-05-28 18:44:10.331580 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-05-28 18:44:11.016274 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:11.016356 | orchestrator |
2025-05-28 18:44:11.016364 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-05-28 18:44:11.058238 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:44:11.058312 | orchestrator |
2025-05-28 18:44:11.058320 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-05-28 18:44:11.695129 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:11.695232 | orchestrator |
2025-05-28 18:44:11.695239 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-28 18:44:13.552016 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:44:13.552150 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:44:13.552164 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:44:13.552177 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:13.552190 | orchestrator |
2025-05-28 18:44:13.552203 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-05-28 18:44:19.721618 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-05-28 18:44:19.721773 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-05-28 18:44:19.721791 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-05-28 18:44:19.721802 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-05-28 18:44:19.721851 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-05-28 18:44:19.721863 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-05-28 18:44:19.721874 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-05-28 18:44:19.721907 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-05-28 18:44:19.721920 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-05-28 18:44:19.721932 | orchestrator | changed: [testbed-manager] => (item=users)
2025-05-28 18:44:19.721943 | orchestrator |
2025-05-28 18:44:19.721955 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-05-28 18:44:20.435899 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-05-28 18:44:20.436034 | orchestrator |
2025-05-28 18:44:20.436051 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-05-28 18:44:20.523502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-05-28 18:44:20.523621 | orchestrator |
2025-05-28 18:44:20.523637 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-05-28 18:44:21.240898 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:21.240998 | orchestrator |
2025-05-28 18:44:21.241006 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-05-28 18:44:21.877929 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:21.878126 | orchestrator |
2025-05-28 18:44:21.878145 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-05-28 18:44:22.636074 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:22.636194 | orchestrator |
2025-05-28 18:44:22.636208 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-05-28 18:44:24.986336 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:24.986520 | orchestrator |
2025-05-28 18:44:24.986538 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-05-28 18:44:25.933120 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:25.933232 | orchestrator |
2025-05-28 18:44:25.933239 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-05-28 18:44:48.136919 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-05-28 18:44:48.137063 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:48.137080 | orchestrator |
2025-05-28 18:44:48.137093 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-05-28 18:44:48.203469 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:44:48.203578 | orchestrator |
2025-05-28 18:44:48.203593 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-05-28 18:44:48.203606 | orchestrator |
2025-05-28 18:44:48.203618 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-05-28 18:44:48.242887 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:44:48.242953 | orchestrator |
2025-05-28 18:44:48.242970 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-28 18:44:48.325655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-05-28 18:44:48.325731 | orchestrator |
2025-05-28 18:44:48.325745 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-05-28 18:44:49.132980 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:49.133088 | orchestrator |
2025-05-28 18:44:49.133104 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-05-28 18:44:49.205816 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:49.205938 | orchestrator |
2025-05-28 18:44:49.205953 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-05-28 18:44:49.251812 | orchestrator | ok: [testbed-manager] => {
2025-05-28 18:44:49.251933 | orchestrator | "msg": "The major version of the running postgres container is 16"
2025-05-28 18:44:49.251950 | orchestrator | }
2025-05-28 18:44:49.251963 | orchestrator |
2025-05-28 18:44:49.251974 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-05-28 18:44:49.888868 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:49.888999 | orchestrator |
2025-05-28 18:44:49.889015 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-05-28 18:44:50.763910 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:50.764016 | orchestrator |
2025-05-28 18:44:50.764032 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-05-28 18:44:50.834212 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:50.834313 | orchestrator |
2025-05-28 18:44:50.834328 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-05-28 18:44:50.873617 | orchestrator | ok: [testbed-manager] => {
2025-05-28 18:44:50.873699 | orchestrator | "msg": "The major version of the postgres image is 16"
2025-05-28 18:44:50.873714 | orchestrator | }
2025-05-28 18:44:50.873726 | orchestrator |
2025-05-28 18:44:50.873737 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-05-28 18:44:50.938135 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:44:50.938211 | orchestrator |
2025-05-28 18:44:50.938224 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-05-28 18:44:50.994409 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:44:50.994527 | orchestrator |
2025-05-28 18:44:50.994540 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-05-28 18:44:51.060601 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:44:51.060690 | orchestrator |
2025-05-28 18:44:51.060703 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-05-28 18:44:51.113719 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:44:51.113805 | orchestrator |
2025-05-28 18:44:51.113818 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-05-28 18:44:51.173801 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:44:51.173872 | orchestrator |
2025-05-28 18:44:51.173884 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-05-28 18:44:51.230757 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:44:51.230820 | orchestrator |
2025-05-28 18:44:51.230837 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-28 18:44:52.775916 | orchestrator | changed: [testbed-manager]
2025-05-28 18:44:52.776032 | orchestrator |
2025-05-28 18:44:52.776049 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-05-28 18:44:52.846961 | orchestrator | ok: [testbed-manager]
2025-05-28 18:44:52.847069 | orchestrator |
2025-05-28 18:44:52.847103 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-05-28 18:45:52.904166 | orchestrator | Pausing for 60 seconds
2025-05-28 18:45:52.904253 | orchestrator | changed: [testbed-manager]
2025-05-28 18:45:52.904269 | orchestrator |
2025-05-28 18:45:52.904287 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-05-28 18:45:52.958257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-05-28 18:45:52.958327 | orchestrator |
2025-05-28 18:45:52.958342 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-05-28 18:50:05.045932 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-05-28 18:50:05.046198 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-05-28 18:50:05.046230 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-05-28 18:50:05.046250 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-05-28 18:50:05.046264 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-05-28 18:50:05.046276 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-05-28 18:50:05.046287 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-05-28 18:50:05.046297 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-05-28 18:50:05.046308 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-05-28 18:50:05.046355 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-05-28 18:50:05.046366 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-05-28 18:50:05.046378 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-05-28 18:50:05.046415 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-05-28 18:50:05.046429 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-05-28 18:50:05.046442 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-05-28 18:50:05.046459 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-05-28 18:50:05.046472 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-05-28 18:50:05.046484 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-05-28 18:50:05.046497 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-05-28 18:50:05.046510 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left).
2025-05-28 18:50:05.046523 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left).
2025-05-28 18:50:05.046536 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left).
2025-05-28 18:50:05.046549 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left).
2025-05-28 18:50:05.046561 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left).
2025-05-28 18:50:05.046573 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:05.046587 | orchestrator |
2025-05-28 18:50:05.046599 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-05-28 18:50:05.046610 | orchestrator |
2025-05-28 18:50:05.046621 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-28 18:50:07.251964 | orchestrator | ok: [testbed-manager]
2025-05-28 18:50:07.252093 | orchestrator |
2025-05-28 18:50:07.252111 | orchestrator | TASK [Apply manager role] ******************************************************
2025-05-28 18:50:07.387999 | orchestrator | included: osism.services.manager for testbed-manager
2025-05-28 18:50:07.388124 | orchestrator |
2025-05-28 18:50:07.388147 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-05-28 18:50:07.460049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-05-28 18:50:07.460170 | orchestrator |
2025-05-28 18:50:07.460187 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-05-28 18:50:09.656159 | orchestrator | ok: [testbed-manager]
2025-05-28 18:50:09.656297 | orchestrator |
2025-05-28 18:50:09.656315 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-05-28 18:50:09.722318 | orchestrator | ok: [testbed-manager]
2025-05-28 18:50:09.722470 | orchestrator |
2025-05-28 18:50:09.722487 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-05-28 18:50:09.835602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-05-28 18:50:09.835665 | orchestrator |
2025-05-28 18:50:09.835679 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-05-28 18:50:12.926295 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-05-28 18:50:12.926434 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-05-28 18:50:12.926449 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-05-28 18:50:12.926462 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-05-28 18:50:12.926506 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-05-28 18:50:12.926519 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-05-28 18:50:12.926530 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-05-28 18:50:12.926541 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-05-28 18:50:12.926553 | orchestrator |
2025-05-28 18:50:12.926566 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-05-28 18:50:13.660899 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:13.661026 | orchestrator |
2025-05-28 18:50:13.661041 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-05-28 18:50:14.358991 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:14.359121 | orchestrator |
2025-05-28 18:50:14.359135 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-05-28 18:50:14.452910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-05-28 18:50:14.453066 | orchestrator |
2025-05-28 18:50:14.453086 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-05-28 18:50:15.770321 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-05-28 18:50:15.770527 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-05-28 18:50:15.770546 | orchestrator |
2025-05-28 18:50:15.770561 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-05-28 18:50:16.457276 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:16.457473 | orchestrator |
2025-05-28 18:50:16.457493 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-05-28 18:50:16.512034 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:50:16.512112 | orchestrator |
2025-05-28 18:50:16.512126 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-05-28 18:50:16.572370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-05-28 18:50:16.572482 | orchestrator |
2025-05-28 18:50:16.572496 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-05-28 18:50:18.203489 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:50:18.203630 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:50:18.203647 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:18.203662 | orchestrator |
2025-05-28 18:50:18.203675 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-05-28 18:50:18.853322 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:18.853507 | orchestrator |
2025-05-28 18:50:18.853525 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-05-28 18:50:18.945381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-05-28 18:50:18.945501 | orchestrator |
2025-05-28 18:50:18.945516 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-05-28 18:50:20.245533 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:50:20.245666 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 18:50:20.245680 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:20.245694 | orchestrator |
2025-05-28 18:50:20.245707 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-05-28 18:50:20.895924 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:20.896058 | orchestrator |
2025-05-28 18:50:20.896074 | orchestrator | TASK [osism.services.manager : Copy inventory-reconciler environment file] *****
2025-05-28 18:50:21.559103 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:21.559229 | orchestrator |
2025-05-28 18:50:21.559245 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-05-28 18:50:21.677773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-05-28 18:50:21.677891 | orchestrator |
2025-05-28 18:50:21.677906 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-05-28 18:50:22.306871 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:22.307024 | orchestrator |
2025-05-28 18:50:22.307042 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-05-28 18:50:22.742338 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:22.742514 | orchestrator |
2025-05-28 18:50:22.742533 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-05-28 18:50:24.130487 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-05-28 18:50:24.130626 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-05-28 18:50:24.130643 | orchestrator |
2025-05-28 18:50:24.130656 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-05-28 18:50:24.909255 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:24.909384 | orchestrator |
2025-05-28 18:50:24.909438 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-05-28 18:50:25.349954 | orchestrator | ok: [testbed-manager]
2025-05-28 18:50:25.350141 | orchestrator |
2025-05-28 18:50:25.350159 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-05-28 18:50:25.738602 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:25.738737 | orchestrator |
2025-05-28 18:50:25.738755 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-05-28 18:50:25.794807 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:50:25.794953 | orchestrator |
2025-05-28 18:50:25.794978 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-05-28 18:50:25.889654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-05-28 18:50:25.889758 | orchestrator |
2025-05-28 18:50:25.889773 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-05-28 18:50:25.950339 | orchestrator | ok: [testbed-manager]
2025-05-28 18:50:25.950442 | orchestrator |
2025-05-28 18:50:25.950456 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-05-28 18:50:28.098562 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-05-28 18:50:28.098665 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-05-28 18:50:28.098680 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-05-28 18:50:28.098691 | orchestrator |
2025-05-28 18:50:28.098705 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-05-28 18:50:28.939472 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:28.939599 | orchestrator |
2025-05-28 18:50:28.939615 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-05-28 18:50:29.727077 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:29.727214 | orchestrator |
2025-05-28 18:50:29.727231 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-05-28 18:50:30.522070 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:30.522203 | orchestrator |
2025-05-28 18:50:30.522221 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-05-28 18:50:30.638727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-05-28 18:50:30.638846 | orchestrator |
2025-05-28 18:50:30.638861 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-05-28 18:50:30.701012 | orchestrator | ok: [testbed-manager]
2025-05-28 18:50:30.701134 | orchestrator |
2025-05-28 18:50:30.701149 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-05-28 18:50:31.457145 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-05-28 18:50:31.457279 | orchestrator |
2025-05-28 18:50:31.457297 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-05-28 18:50:31.564057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-05-28 18:50:31.564181 | orchestrator |
2025-05-28 18:50:31.564199 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-05-28 18:50:32.372303 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:32.372475 | orchestrator |
2025-05-28 18:50:32.372491 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-05-28 18:50:33.053987 | orchestrator | ok: [testbed-manager]
2025-05-28 18:50:33.054168 | orchestrator |
2025-05-28 18:50:33.054188 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-05-28 18:50:33.106729 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:50:33.106807 | orchestrator |
2025-05-28 18:50:33.106823 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-05-28 18:50:33.171500 | orchestrator | ok: [testbed-manager]
2025-05-28 18:50:33.171568 | orchestrator |
2025-05-28 18:50:33.171584 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-05-28 18:50:34.206567 | orchestrator | changed: [testbed-manager]
2025-05-28 18:50:34.206687 | orchestrator |
2025-05-28 18:50:34.206702 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-05-28 18:51:16.724851 | orchestrator | changed: [testbed-manager]
2025-05-28 18:51:16.724992 | orchestrator |
2025-05-28 18:51:16.725010 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-05-28 18:51:17.556665 | orchestrator | ok: [testbed-manager]
2025-05-28 18:51:17.556793 | orchestrator |
2025-05-28 18:51:17.556811 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-05-28 18:51:20.586510 | orchestrator | changed: [testbed-manager]
2025-05-28 18:51:20.586634 | orchestrator |
2025-05-28 18:51:20.586650 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-05-28 18:51:20.668108 | orchestrator | ok: [testbed-manager]
2025-05-28 18:51:20.668206 | orchestrator |
2025-05-28 18:51:20.668221 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-28 18:51:20.668234 | orchestrator |
2025-05-28 18:51:20.668245 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-05-28 18:51:20.735091 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:51:20.735182 | orchestrator |
2025-05-28 18:51:20.735197 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-05-28 18:52:20.789720 | orchestrator | Pausing for 60 seconds
2025-05-28 18:52:20.789867 | orchestrator | changed: [testbed-manager]
2025-05-28 18:52:20.789897 | orchestrator |
2025-05-28 18:52:20.789922 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-05-28 18:52:26.258609 | orchestrator | changed: [testbed-manager]
2025-05-28 18:52:26.258710 | orchestrator |
2025-05-28 18:52:26.258722 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-05-28 18:53:07.847502 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-05-28 18:53:07.847622 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-05-28 18:53:07.847638 | orchestrator | changed: [testbed-manager]
2025-05-28 18:53:07.847652 | orchestrator |
2025-05-28 18:53:07.847665 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-05-28 18:53:13.176343 | orchestrator | changed: [testbed-manager]
2025-05-28 18:53:13.176454 | orchestrator |
2025-05-28 18:53:13.176471 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-05-28 18:53:13.268006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-05-28 18:53:13.268102 | orchestrator |
2025-05-28 18:53:13.268117 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-28 18:53:13.268131 | orchestrator |
2025-05-28 18:53:13.268156 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-05-28 18:53:13.318165 | orchestrator | skipping: [testbed-manager]
2025-05-28 18:53:13.318297 | orchestrator |
2025-05-28 18:53:13.318312 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 18:53:13.318326 | orchestrator | testbed-manager : ok=111 changed=59 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025-05-28 18:53:13.318338 | orchestrator |
2025-05-28 18:53:13.432474 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-28 18:53:13.432564 | orchestrator | + deactivate
2025-05-28 18:53:13.432579 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-28 18:53:13.432593 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-28 18:53:13.432636 | orchestrator | + export PATH
2025-05-28 18:53:13.432648 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-28 18:53:13.432660 | orchestrator | + '[' -n '' ']'
2025-05-28 18:53:13.432672 | orchestrator | + hash -r
2025-05-28 18:53:13.432683 | orchestrator | + '[' -n '' ']'
2025-05-28 18:53:13.432694 | orchestrator | + unset VIRTUAL_ENV
2025-05-28 18:53:13.432705 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-28 18:53:13.432716 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-28 18:53:13.432727 | orchestrator | + unset -f deactivate
2025-05-28 18:53:13.432739 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-05-28 18:53:13.437932 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-28 18:53:13.437982 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-28 18:53:13.437999 | orchestrator | + local max_attempts=60
2025-05-28 18:53:13.438068 | orchestrator | + local name=ceph-ansible
2025-05-28 18:53:13.438083 | orchestrator | + local attempt_num=1
2025-05-28 18:53:13.438867 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-28 18:53:13.468946 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-28 18:53:13.469097 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-28 18:53:13.469113 | orchestrator | + local max_attempts=60
2025-05-28 18:53:13.469126 | orchestrator | + local name=kolla-ansible
2025-05-28 18:53:13.469138 | orchestrator | + local attempt_num=1
2025-05-28 18:53:13.469158 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-28 18:53:13.501520 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-28 18:53:13.501571 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-28 18:53:13.501586 | orchestrator | + local max_attempts=60
2025-05-28 18:53:13.501599 | orchestrator | + local name=osism-ansible
2025-05-28 18:53:13.501611 | orchestrator | + local attempt_num=1
2025-05-28 18:53:13.502572 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-28 18:53:13.538213 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-28 18:53:13.538317 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-28 18:53:13.538331 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-28 18:53:14.209194 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-05-28 18:53:14.413589 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-28 18:53:14.413707 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413724 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413760 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-05-28 18:53:14.413780 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-05-28 18:53:14.413792 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413803 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413814 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413825 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy)
2025-05-28 18:53:14.413858 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413870 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-05-28 18:53:14.413881 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413892 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413903 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-05-28 18:53:14.413914 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413925 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413935 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.413946 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy)
2025-05-28 18:53:14.419633 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-05-28 18:53:14.559401 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-28 18:53:14.559501 | orchestrator | netbox-netbox-1
registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-28 18:53:14.559515 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-28 18:53:14.559529 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-05-28 18:53:14.559542 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-05-28 18:53:14.566837 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-28 18:53:14.626091 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-28 18:53:14.626192 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-28 18:53:14.630693 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-28 18:53:16.063799 | orchestrator | 2025-05-28 18:53:16 | INFO  | Task db5aac59-b572-4de6-8ddb-7a9b5f0fcc7c (resolvconf) was prepared for execution. 2025-05-28 18:53:16.063896 | orchestrator | 2025-05-28 18:53:16 | INFO  | It takes a moment until task db5aac59-b572-4de6-8ddb-7a9b5f0fcc7c (resolvconf) has been started and output is visible here. 
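The `wait_for_container_healthy` helper traced earlier in the set -x output can be reconstructed roughly as follows. Only the first, already-healthy attempt is visible in the log, so the retry interval and failure handling below are assumptions, not the testbed's actual implementation:

```shell
#!/usr/bin/env bash
# Sketch of the wait_for_container_healthy helper seen in the set -x trace.
# The locals and the docker inspect health probe match the trace; the sleep
# interval and the give-up path are assumed, since every container in this
# run was healthy on the first check.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num == max_attempts )); then
            echo "Container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

The trace shows it being called once per long-running Ansible runner container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) with a limit of 60 attempts.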
2025-05-28 18:53:18.895187 | orchestrator | 2025-05-28 18:53:18.895354 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-28 18:53:18.895371 | orchestrator | 2025-05-28 18:53:18.895644 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-28 18:53:18.896069 | orchestrator | Wednesday 28 May 2025 18:53:18 +0000 (0:00:00.089) 0:00:00.089 ********* 2025-05-28 18:53:22.944437 | orchestrator | ok: [testbed-manager] 2025-05-28 18:53:22.945149 | orchestrator | 2025-05-28 18:53:22.945884 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-28 18:53:22.946390 | orchestrator | Wednesday 28 May 2025 18:53:22 +0000 (0:00:04.054) 0:00:04.144 ********* 2025-05-28 18:53:22.990216 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:53:22.990315 | orchestrator | 2025-05-28 18:53:22.991898 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-28 18:53:22.991924 | orchestrator | Wednesday 28 May 2025 18:53:22 +0000 (0:00:00.046) 0:00:04.190 ********* 2025-05-28 18:53:23.078713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-28 18:53:23.079127 | orchestrator | 2025-05-28 18:53:23.081094 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-28 18:53:23.081686 | orchestrator | Wednesday 28 May 2025 18:53:23 +0000 (0:00:00.086) 0:00:04.277 ********* 2025-05-28 18:53:23.165515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-28 18:53:23.166467 | orchestrator | 2025-05-28 18:53:23.166690 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-05-28 18:53:23.167781 | orchestrator | Wednesday 28 May 2025 18:53:23 +0000 (0:00:00.087) 0:00:04.364 ********* 2025-05-28 18:53:24.089957 | orchestrator | ok: [testbed-manager] 2025-05-28 18:53:24.090667 | orchestrator | 2025-05-28 18:53:24.090863 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-28 18:53:24.091323 | orchestrator | Wednesday 28 May 2025 18:53:24 +0000 (0:00:00.924) 0:00:05.289 ********* 2025-05-28 18:53:24.130370 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:53:24.130544 | orchestrator | 2025-05-28 18:53:24.131594 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-28 18:53:24.131949 | orchestrator | Wednesday 28 May 2025 18:53:24 +0000 (0:00:00.040) 0:00:05.330 ********* 2025-05-28 18:53:24.562986 | orchestrator | ok: [testbed-manager] 2025-05-28 18:53:24.563657 | orchestrator | 2025-05-28 18:53:24.564269 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-28 18:53:24.564927 | orchestrator | Wednesday 28 May 2025 18:53:24 +0000 (0:00:00.432) 0:00:05.762 ********* 2025-05-28 18:53:24.637963 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:53:24.638213 | orchestrator | 2025-05-28 18:53:24.639347 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-28 18:53:24.640013 | orchestrator | Wednesday 28 May 2025 18:53:24 +0000 (0:00:00.075) 0:00:05.837 ********* 2025-05-28 18:53:25.139927 | orchestrator | changed: [testbed-manager] 2025-05-28 18:53:25.140071 | orchestrator | 2025-05-28 18:53:25.140085 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-28 18:53:25.140297 | orchestrator | Wednesday 28 May 2025 18:53:25 +0000 (0:00:00.499) 0:00:06.337 ********* 2025-05-28 18:53:26.270973 | orchestrator | changed: 
[testbed-manager] 2025-05-28 18:53:26.271263 | orchestrator | 2025-05-28 18:53:26.272089 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-28 18:53:26.272445 | orchestrator | Wednesday 28 May 2025 18:53:26 +0000 (0:00:01.132) 0:00:07.470 ********* 2025-05-28 18:53:27.204456 | orchestrator | ok: [testbed-manager] 2025-05-28 18:53:27.204588 | orchestrator | 2025-05-28 18:53:27.205057 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-28 18:53:27.205095 | orchestrator | Wednesday 28 May 2025 18:53:27 +0000 (0:00:00.932) 0:00:08.402 ********* 2025-05-28 18:53:27.292473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-28 18:53:27.292623 | orchestrator | 2025-05-28 18:53:27.293358 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-28 18:53:27.294154 | orchestrator | Wednesday 28 May 2025 18:53:27 +0000 (0:00:00.089) 0:00:08.492 ********* 2025-05-28 18:53:28.459481 | orchestrator | changed: [testbed-manager] 2025-05-28 18:53:28.460465 | orchestrator | 2025-05-28 18:53:28.461912 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 18:53:28.461944 | orchestrator | 2025-05-28 18:53:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 18:53:28.461958 | orchestrator | 2025-05-28 18:53:28 | INFO  | Please wait and do not abort execution. 
2025-05-28 18:53:28.462927 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 18:53:28.464164 | orchestrator | 2025-05-28 18:53:28.465198 | orchestrator | Wednesday 28 May 2025 18:53:28 +0000 (0:00:01.165) 0:00:09.657 ********* 2025-05-28 18:53:28.466405 | orchestrator | =============================================================================== 2025-05-28 18:53:28.467134 | orchestrator | Gathering Facts --------------------------------------------------------- 4.05s 2025-05-28 18:53:28.468287 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2025-05-28 18:53:28.468671 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.13s 2025-05-28 18:53:28.469720 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2025-05-28 18:53:28.470148 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.92s 2025-05-28 18:53:28.470958 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2025-05-28 18:53:28.471624 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.43s 2025-05-28 18:53:28.472160 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-05-28 18:53:28.472584 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-05-28 18:53:28.472913 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-05-28 18:53:28.473579 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-05-28 18:53:28.474010 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-05-28 18:53:28.474481 | orchestrator | 
osism.commons.resolvconf : Install package systemd-resolved ------------- 0.04s 2025-05-28 18:53:28.758073 | orchestrator | + osism apply sshconfig 2025-05-28 18:53:30.066351 | orchestrator | 2025-05-28 18:53:30 | INFO  | Task 50f2e940-5f04-4ce3-ae82-6c4875953c78 (sshconfig) was prepared for execution. 2025-05-28 18:53:30.066487 | orchestrator | 2025-05-28 18:53:30 | INFO  | It takes a moment until task 50f2e940-5f04-4ce3-ae82-6c4875953c78 (sshconfig) has been started and output is visible here. 2025-05-28 18:53:32.873419 | orchestrator | 2025-05-28 18:53:32.876318 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-28 18:53:32.876691 | orchestrator | 2025-05-28 18:53:32.876971 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-28 18:53:32.877629 | orchestrator | Wednesday 28 May 2025 18:53:32 +0000 (0:00:00.102) 0:00:00.102 ********* 2025-05-28 18:53:33.398563 | orchestrator | ok: [testbed-manager] 2025-05-28 18:53:33.398675 | orchestrator | 2025-05-28 18:53:33.398692 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-28 18:53:33.399895 | orchestrator | Wednesday 28 May 2025 18:53:33 +0000 (0:00:00.526) 0:00:00.628 ********* 2025-05-28 18:53:33.845887 | orchestrator | changed: [testbed-manager] 2025-05-28 18:53:33.846151 | orchestrator | 2025-05-28 18:53:33.846974 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-28 18:53:33.847539 | orchestrator | Wednesday 28 May 2025 18:53:33 +0000 (0:00:00.447) 0:00:01.076 ********* 2025-05-28 18:53:39.768233 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-28 18:53:39.768843 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-28 18:53:39.770092 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-28 18:53:39.770504 | orchestrator | 
changed: [testbed-manager] => (item=testbed-node-5) 2025-05-28 18:53:39.770937 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-28 18:53:39.772491 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-28 18:53:39.772988 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-28 18:53:39.774367 | orchestrator | 2025-05-28 18:53:39.774995 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-28 18:53:39.775553 | orchestrator | Wednesday 28 May 2025 18:53:39 +0000 (0:00:05.918) 0:00:06.995 ********* 2025-05-28 18:53:39.852147 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:53:39.852856 | orchestrator | 2025-05-28 18:53:39.853668 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-28 18:53:39.854111 | orchestrator | Wednesday 28 May 2025 18:53:39 +0000 (0:00:00.086) 0:00:07.081 ********* 2025-05-28 18:53:40.515784 | orchestrator | changed: [testbed-manager] 2025-05-28 18:53:40.516001 | orchestrator | 2025-05-28 18:53:40.520091 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 18:53:40.520358 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 18:53:40.520386 | orchestrator | 2025-05-28 18:53:40.520400 | orchestrator | Wednesday 28 May 2025 18:53:40 +0000 (0:00:00.664) 0:00:07.745 ********* 2025-05-28 18:53:40.520412 | orchestrator | =============================================================================== 2025-05-28 18:53:40.520424 | orchestrator | 2025-05-28 18:53:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 18:53:40.520436 | orchestrator | 2025-05-28 18:53:40 | INFO  | Please wait and do not abort execution. 
2025-05-28 18:53:40.520461 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.92s 2025-05-28 18:53:40.520831 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.66s 2025-05-28 18:53:40.521536 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.53s 2025-05-28 18:53:40.522342 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.45s 2025-05-28 18:53:40.522686 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2025-05-28 18:53:41.015314 | orchestrator | + osism apply known-hosts 2025-05-28 18:53:42.533550 | orchestrator | 2025-05-28 18:53:42 | INFO  | Task 2b6a1579-3a36-4ce6-b7ab-bc1eaa8272bb (known-hosts) was prepared for execution. 2025-05-28 18:53:42.533646 | orchestrator | 2025-05-28 18:53:42 | INFO  | It takes a moment until task 2b6a1579-3a36-4ce6-b7ab-bc1eaa8272bb (known-hosts) has been started and output is visible here. 
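The known-hosts play that follows scans each testbed host with `ssh-keyscan` and then writes one known_hosts entry per scanned key, idempotently. A minimal sketch of the per-entry write step (the real role uses Ansible's known_hosts handling; the helper name below is hypothetical):

```shell
# Hypothetical helper mirroring the "Write scanned known_hosts entries"
# tasks below: append a scanned "host keytype key" line to a known_hosts
# file unless it is already present, so re-runs report no change.
add_known_host() {
    local file=$1 entry=$2
    touch "$file"
    grep -qxF "$entry" "$file" || printf '%s\n' "$entry" >> "$file"
}
```

The scan side is a plain `ssh-keyscan <host>` per inventory host, as the "Run ssh-keyscan for all hosts" tasks indicate; the play repeats the whole cycle once keyed by hostname and once keyed by `ansible_host` IP.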
2025-05-28 18:53:45.831802 | orchestrator | 2025-05-28 18:53:45.831920 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-28 18:53:45.832806 | orchestrator | 2025-05-28 18:53:45.832832 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-28 18:53:45.833794 | orchestrator | Wednesday 28 May 2025 18:53:45 +0000 (0:00:00.107) 0:00:00.107 ********* 2025-05-28 18:53:51.948744 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-28 18:53:51.949774 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-28 18:53:51.950928 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-28 18:53:51.952458 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-28 18:53:51.952827 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-28 18:53:51.953929 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-28 18:53:51.953966 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-28 18:53:51.954534 | orchestrator | 2025-05-28 18:53:51.954952 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-28 18:53:51.955331 | orchestrator | Wednesday 28 May 2025 18:53:51 +0000 (0:00:06.126) 0:00:06.234 ********* 2025-05-28 18:53:52.134350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-28 18:53:52.134455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-28 18:53:52.135345 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-28 18:53:52.136336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-28 18:53:52.136561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-28 18:53:52.137017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-28 18:53:52.137598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-28 18:53:52.137972 | orchestrator | 2025-05-28 18:53:52.138533 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:53:52.138883 | orchestrator | Wednesday 28 May 2025 18:53:52 +0000 (0:00:00.186) 0:00:06.420 ********* 2025-05-28 18:53:53.361101 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG4lV4PPFxygBUUQ8CLYP5rct01G5ZFHxa55PSLK3C2jqJIL++IETqj2qKlrIRCoCvq9UVEC067SedLWXVLko4U=) 2025-05-28 18:53:53.362479 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZGfgadPKZNR0beQeSVmSf1ThnkeXUlJHeZWgXpakG2h2thruWQyukKan2LikPOrk515IG+DVlqEZefZ45ubRPNe86MVcecdr3b5vmQzYpamZ72tDk6R6xYHdIBC4I/SQCIexGeKALDqauFv8yKv09WuQMaxZ2rT2P/sjtgUD8P2++cKPk1mLzqdUmcdayAEJ9z79Jfes6O8L7s+coR6NnrBcG7P4jsZUmZ9WIEoHlY1L7mvfvBJw18GRIkb0V0RKgpMsODc6rPHWnV/ClRc7SEXw9LYJP+/viyjVKi2HDAsdufM4M+57fgtUfPQQA510Cz+o1DHG4K6ojLZPwflgeFcrR/D+qoYSo4Fn/1HWZCF8rA5LF9u6sqhU+bHT3vIVf+9/rn2BRqBB+2qZqkrTvBhEFcnKr9GAjZvl21p6lE8iqENsPfL3H2fvEYrmX2mZDaF36KKasCkEdqslOBJ3z+17ddUzgpqtCH9joBW9H/bjb90eYQQ9OhRpgJLX/xhU=) 2025-05-28 18:53:53.362833 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE+dSmKzq7oTyqDeiNUW6cZrYDEMYMLBKhqkXimayh7z) 2025-05-28 18:53:53.363599 | orchestrator | 2025-05-28 18:53:53.365515 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:53:53.366385 | orchestrator | Wednesday 28 May 2025 18:53:53 +0000 (0:00:01.227) 0:00:07.647 ********* 2025-05-28 18:53:54.472558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCt5CTv99sE4/MUfDjdvTbCfV2GrQefzcA6yypIZQ6r+MijCR0EQ3/tWKYDH1zHieRiw16UweHMy/MZD1EKCR/8mzK4G+qTdySZsdeopus3QQ3BtokDIHHnk7HoOB//m2ZUnXKlvVmXctRD7Ok25hfZevcEYwa7RCiEijnQONX17F2l0P6DJDW39fqiwYR3YUbd0aQhz2gaffKT1UaLAJh2nvVaCa7MHdMVe950ep1BcjVW8CF3uTLRCvs30yhSoOPWf6NKtsSrQZV8it7PB8vtX/0j1RTytq4tm7h9ceUQyqJ+597rilGARhpT6ta3wlJAhMVXZQ5MAeCFibbLJja7TFs8HTRlSIOtc7RshyUoeulolWzOrUb3ThvUmm32S8cbrXhdU+MPHVPDgstOsDmrmfC9KNeBflsmx7Z1vh2REzF6ppYQcn6obmTb1NG3qUUoZ9oD02ezsIhj5rtQt99PtxuS6GKIvGq6bIPZggNa6y0gjLoSxinykUmYQM6fg/c=) 2025-05-28 18:53:54.472706 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKf58RoPkawq+d8POTRopigxdJbzT1eGUahijIaAEqXG) 2025-05-28 18:53:54.472793 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNgAz12ylZxXD2JMfA+NNwQWGa0m1367jcclSyiQ0vH0T8bF8GL2+SyWx8pijHY2IN0c51KS8tBUfXXQK2UjpgE=) 2025-05-28 18:53:54.474103 | orchestrator | 2025-05-28 18:53:54.474180 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:53:54.474422 | orchestrator | Wednesday 28 May 2025 18:53:54 +0000 (0:00:01.111) 0:00:08.759 ********* 2025-05-28 18:53:55.590640 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTqDlwxtGst3IcettT7Cf9lBFdY4c+YZ4Bmj1YJHabPHH91JUutPkHjJUnp7jtdWYB8r9A4jDPcabIRVrjufu4lvjROHJUKiD/jy9Yh+JDIE/CFqr73esY/67S+H7GzYORv/urkA+gvgVYRH/iZO2Q19BP2R6GePxXsCaM84hYM7y/zGT955jaJ44VPEyGCaN1/x3paTJ4zH4dO+DqPUy4jL+iL5FBgeJ+DfxQHerO5onNmEq7/5a/0F4k4xdLbPWpk0s0ch4/I/fzMD4Vy/Yvoc7BG+DmN/eKGKNKiipKAv36Dv8Xw/WcnUv9hhD3vQLnHaPaCVDl6vpa5A9wY4Uv71ZYgC8ebhVcJdzg2yYVfv9ydMuPdgAiCf/zJAI0AK7Zwd9l0Uwhv7eGTXeoq92Yw3Hm2RECk0RK0awQvPET8NXFtEJVBdgbS9BRurcQ9xcdMF6U/i/E353pvhuqgUIDGkmXwgKtygh5QrEEgLDOSPd0p9HMls7Fnt1n7YfxZ90=) 2025-05-28 18:53:55.590753 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMotYgV1H0IARkMgNCPCtpmfGCNhFS4WNaq954AWsz78sfFdMZLjpN68AjOUIjUFUoO1e6e47tD/x10YSw21zXM=) 2025-05-28 18:53:55.590771 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFhHO1zqLW7e3UvzOqNjJwCxNJwiI23Bsgck4hjAwUB8) 2025-05-28 18:53:55.590785 | orchestrator | 2025-05-28 18:53:55.590797 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:53:55.590810 | orchestrator | Wednesday 28 May 2025 18:53:55 +0000 (0:00:01.116) 0:00:09.876 ********* 2025-05-28 18:53:56.722886 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDWM/tIduJdHuAEUYH4rYpt0i+ZNNEnNB+olZIfwqEHbDKWkHY8L6BK7FNYTsFYnnvV/Eg248qLOxHZL7xYXhQOHnLDMxjAdw8fwvkbBymsaK6f4xd8lwEk/+pQGCpBvarIyxZBrRBGstph+SIriize+eExXJKVZIWBVXIjeGMhozwrZgxlj80DCFhHFTPvtCXEMhn6sZCLYT3DL1BGYeoyTekzE/FwycQuDWSULGpX7qcKVVyOXg+DP5iZXloo/QCANtWP9phKDHFiUpLTL2Ybw25GzN+lFddCNNivBaU0W5HDMcfbPIp6lB7yRBiJLPMesEgRuGoAJJIhauxTHswPZA0LBDgUmPnuh7f1oRBW4ak7/a4rz72jOcwq1swRigj+q/huBMVAJHGlWu4KpdiNdXktHMNNmH4JXC5WMot0rwA9ku0JhDj6l28Ipa5wccaWmDWHQUYufFXgewzO2CMGkNzYXw6EkvHyc2WkriXy2wTIj5ApfWex6nB3Z4Wd848=) 2025-05-28 18:53:56.723024 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKPFPenUcpPDmk5gHGipkcybATvpZpeoBiY+MoJERlGO) 2025-05-28 18:53:56.723131 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCVPb+hL8nqD5lHsq5RDF3pmzA8UdFlt1n+XdFQRREouPzU7jpkWy1FubWplJMp9maBFtzpWBf/cRfl0qN3jI+0=) 2025-05-28 18:53:56.723490 | orchestrator | 2025-05-28 18:53:56.725653 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:53:56.725996 | orchestrator | Wednesday 28 May 2025 18:53:56 +0000 (0:00:01.133) 0:00:11.009 ********* 2025-05-28 18:53:57.890301 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCt7d+xy4PvG0WzDn6XfvLqzGFCI2vD8wuNiaUYL3zqE9YlZgW9IPxaem7jBEa8T7p5t37ijomqqJkUNy3molZmRtKIkOYQACBswcnve/jx5073yZ8zLf+tYjvjiZTUBdJrEfByBevicAlR6qHgb1PgrPabv/syfFfV9bkN7Ujxf5ormIBZ9FGtijxvn02EGFwjWUPEADhzNTmlTXBeMkDCS7nVrJFpPOtGdNICKf5kiCaXd2G/0F9Vt06AQugdMmsFV74YKXRpE7Jnk6bLdpi30WQvj1tsUZLmDEGu83k1yLhOAdkb4fVp0/3GhpnuBVFzfFlSuEee1sOXu7iZ+H5WXePT21s/Lp0vIWt3Vvi9n8WQ/xgs8mFXsy76Kjo3An3ay6PrDO/ZYXw052RLCFGFT+EJUn60dmgzV30Gf4p5k3BNp4rKxo6Y3pTp2MPpsWK4EKELz25wstZojBR5s1gKVGRyumHbLqN5JhmSG2DQ8dD3xZdDC95Fwt9CoGT3ads=) 2025-05-28 18:53:57.891755 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKLBpnJFttbxm8AQTLVb6ggQrmYlmUbPsolJKFVE+qyPZfV83+1j6NpGxO17Zwu09ivfswxWtxIx9iKHdZvF0B0=) 2025-05-28 18:53:57.892376 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBR9UXoW+Dl0P+xreq+NcyD4XSLMIfch4eTc/ji0krb9) 2025-05-28 18:53:57.892978 | orchestrator | 2025-05-28 18:53:57.893601 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:53:57.894799 | orchestrator | Wednesday 28 May 2025 18:53:57 +0000 (0:00:01.166) 0:00:12.175 ********* 2025-05-28 18:53:59.132796 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzJowkxeXjVFZEyPm3mOyOUMe/H5Nu0Ho9CegGUMnSL7wknzhxH6NKjbv4dUin5qH78z5F8kAG+Hx6Y69Lx944qRoX9/hlpTjVuFsE1x3LdTZ/g2uk3iYkeuYrGu598bVVtgyGjmAurb1+oYzEYG3b7MJsrFeM8stWOYOwkJFNRIbf+QChR0f7sR6KyU6/QUxoOGhOEmoPuSHVPUR0ixicMwQ2t9BGtyftPqk5vvcpRKBMiRup8qrycndIOKpVx2vhecbLj8XsWpxLtbl9G8YEzinJWsqlbHCmDJvS3QCmyJRMnwd4PmUOCsLp23f5hC6dfckBUcGdAStRAG5q+I3TOe+qDvf9VuNfNa1xXvewgDHjKMLmVUuLfjjGsD3kn3BhjIMSna0Hjj47k9oi82G5Mjd3q3Bb0MrcqLFeN4Z1W9WxZDvisjOjDx0yWt8OxlNwIkpxGy8CnV/qZwEzsDh0gOhuVMwHso0SUncwc5R/vEXD0PqccfmXLuzJ81LaOzM=) 2025-05-28 18:53:59.133839 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMVFNLgQnUXtYLIri9So+sLEqyq/PenSCuUsi+wkijotAy0+5RD6nnKF8L/VlX0gVs6UdnMUWRn1twQ+k4lPIqo=) 2025-05-28 18:53:59.134553 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIW48LRyXK5LSE7KeJ8yfuW0gQcY9qi1URoArykec0Rv) 2025-05-28 18:53:59.136318 | orchestrator | 2025-05-28 18:53:59.136615 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:53:59.137149 | orchestrator | Wednesday 28 May 2025 18:53:59 +0000 (0:00:01.242) 
0:00:13.418 ********* 2025-05-28 18:54:00.268755 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnTzh77GiWfX/02Rm0VMfQT6qBnoEZ0Ey7WFVHG9ryTrap/AopDYjZG8pI6nt9+3v5v9k/eAwKZtm3WRanE59PDdCpoex+Sm85CDcYPpVi47vmh53y8k9Q4tuafGovMEAdBXFpdhYMlQLCe8VLas2drApGAQKyVJ4ntSJ6paAF559cDFuE2XE9CwBViKPnPx1boJlvNaygaWe1jiiklx3pZ8wacdzcwmH65JZoAM9eWWlbMjmFskst6gbKz5NonuVzUuj23Rax6xON+gpuMhoja5oteaox+1NquHbPoLTFFLGwsTIkave+G9VGsyp5SwrcKkdPDRVxUSZTRYsE+QDnOBsba6pEysMQs4ZiapJvGEMl2y77D46AogJOq4BA+zjnQh/W79n390IcytbN4FRrD/jZ8QwFyjInBgy9Y8TY6rzzXZsaqYss0TyrAOFNtxEM+3NxnTyNrviNYJvh62pYsBUvUJwEsH4xYTpBuu9G6H2IhpFKlyO7ChUmGh58TUk=) 2025-05-28 18:54:00.268971 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKsIj7gmmtKuvIokquEW+dbftkdKcvM+wwTD9bRV0vI+G/kfmb3JIMusFfhuENgteilGARIzDPH30mGrba7ryxg=) 2025-05-28 18:54:00.271002 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILFVzB6CCV2MO28ly5LZsxpq78C/aWZQYB4sAbeDCB0l) 2025-05-28 18:54:00.271516 | orchestrator | 2025-05-28 18:54:00.271878 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-28 18:54:00.272314 | orchestrator | Wednesday 28 May 2025 18:54:00 +0000 (0:00:01.136) 0:00:14.554 ********* 2025-05-28 18:54:06.001589 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-28 18:54:06.002331 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-28 18:54:06.002672 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-28 18:54:06.003471 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-28 18:54:06.003840 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-28 18:54:06.004590 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-28 18:54:06.006431 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-2) 2025-05-28 18:54:06.006630 | orchestrator | 2025-05-28 18:54:06.007105 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-28 18:54:06.007488 | orchestrator | Wednesday 28 May 2025 18:54:05 +0000 (0:00:05.729) 0:00:20.284 ********* 2025-05-28 18:54:06.211016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-28 18:54:06.211155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-28 18:54:06.211171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-28 18:54:06.211217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-28 18:54:06.211230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-28 18:54:06.213682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-28 18:54:06.214252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-28 18:54:06.214568 | orchestrator | 2025-05-28 18:54:06.214882 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:54:06.215426 | orchestrator | Wednesday 28 May 2025 18:54:06 +0000 (0:00:00.211) 0:00:20.495 ********* 2025-05-28 18:54:07.349902 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE+dSmKzq7oTyqDeiNUW6cZrYDEMYMLBKhqkXimayh7z) 2025-05-28 18:54:07.350992 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZGfgadPKZNR0beQeSVmSf1ThnkeXUlJHeZWgXpakG2h2thruWQyukKan2LikPOrk515IG+DVlqEZefZ45ubRPNe86MVcecdr3b5vmQzYpamZ72tDk6R6xYHdIBC4I/SQCIexGeKALDqauFv8yKv09WuQMaxZ2rT2P/sjtgUD8P2++cKPk1mLzqdUmcdayAEJ9z79Jfes6O8L7s+coR6NnrBcG7P4jsZUmZ9WIEoHlY1L7mvfvBJw18GRIkb0V0RKgpMsODc6rPHWnV/ClRc7SEXw9LYJP+/viyjVKi2HDAsdufM4M+57fgtUfPQQA510Cz+o1DHG4K6ojLZPwflgeFcrR/D+qoYSo4Fn/1HWZCF8rA5LF9u6sqhU+bHT3vIVf+9/rn2BRqBB+2qZqkrTvBhEFcnKr9GAjZvl21p6lE8iqENsPfL3H2fvEYrmX2mZDaF36KKasCkEdqslOBJ3z+17ddUzgpqtCH9joBW9H/bjb90eYQQ9OhRpgJLX/xhU=) 2025-05-28 18:54:07.351055 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG4lV4PPFxygBUUQ8CLYP5rct01G5ZFHxa55PSLK3C2jqJIL++IETqj2qKlrIRCoCvq9UVEC067SedLWXVLko4U=) 2025-05-28 18:54:07.351073 | orchestrator | 2025-05-28 18:54:07.351319 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:54:07.351979 | orchestrator | Wednesday 28 May 2025 18:54:07 +0000 (0:00:01.140) 0:00:21.636 ********* 2025-05-28 18:54:08.457576 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKf58RoPkawq+d8POTRopigxdJbzT1eGUahijIaAEqXG) 2025-05-28 18:54:08.457683 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCt5CTv99sE4/MUfDjdvTbCfV2GrQefzcA6yypIZQ6r+MijCR0EQ3/tWKYDH1zHieRiw16UweHMy/MZD1EKCR/8mzK4G+qTdySZsdeopus3QQ3BtokDIHHnk7HoOB//m2ZUnXKlvVmXctRD7Ok25hfZevcEYwa7RCiEijnQONX17F2l0P6DJDW39fqiwYR3YUbd0aQhz2gaffKT1UaLAJh2nvVaCa7MHdMVe950ep1BcjVW8CF3uTLRCvs30yhSoOPWf6NKtsSrQZV8it7PB8vtX/0j1RTytq4tm7h9ceUQyqJ+597rilGARhpT6ta3wlJAhMVXZQ5MAeCFibbLJja7TFs8HTRlSIOtc7RshyUoeulolWzOrUb3ThvUmm32S8cbrXhdU+MPHVPDgstOsDmrmfC9KNeBflsmx7Z1vh2REzF6ppYQcn6obmTb1NG3qUUoZ9oD02ezsIhj5rtQt99PtxuS6GKIvGq6bIPZggNa6y0gjLoSxinykUmYQM6fg/c=) 2025-05-28 18:54:08.457838 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNgAz12ylZxXD2JMfA+NNwQWGa0m1367jcclSyiQ0vH0T8bF8GL2+SyWx8pijHY2IN0c51KS8tBUfXXQK2UjpgE=) 2025-05-28 18:54:08.460887 | orchestrator | 2025-05-28 18:54:08.460914 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:54:08.461304 | orchestrator | Wednesday 28 May 2025 18:54:08 +0000 (0:00:01.107) 0:00:22.743 ********* 2025-05-28 18:54:09.548538 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMotYgV1H0IARkMgNCPCtpmfGCNhFS4WNaq954AWsz78sfFdMZLjpN68AjOUIjUFUoO1e6e47tD/x10YSw21zXM=) 2025-05-28 18:54:09.548647 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTqDlwxtGst3IcettT7Cf9lBFdY4c+YZ4Bmj1YJHabPHH91JUutPkHjJUnp7jtdWYB8r9A4jDPcabIRVrjufu4lvjROHJUKiD/jy9Yh+JDIE/CFqr73esY/67S+H7GzYORv/urkA+gvgVYRH/iZO2Q19BP2R6GePxXsCaM84hYM7y/zGT955jaJ44VPEyGCaN1/x3paTJ4zH4dO+DqPUy4jL+iL5FBgeJ+DfxQHerO5onNmEq7/5a/0F4k4xdLbPWpk0s0ch4/I/fzMD4Vy/Yvoc7BG+DmN/eKGKNKiipKAv36Dv8Xw/WcnUv9hhD3vQLnHaPaCVDl6vpa5A9wY4Uv71ZYgC8ebhVcJdzg2yYVfv9ydMuPdgAiCf/zJAI0AK7Zwd9l0Uwhv7eGTXeoq92Yw3Hm2RECk0RK0awQvPET8NXFtEJVBdgbS9BRurcQ9xcdMF6U/i/E353pvhuqgUIDGkmXwgKtygh5QrEEgLDOSPd0p9HMls7Fnt1n7YfxZ90=) 
2025-05-28 18:54:09.548666 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFhHO1zqLW7e3UvzOqNjJwCxNJwiI23Bsgck4hjAwUB8) 2025-05-28 18:54:09.548680 | orchestrator | 2025-05-28 18:54:09.549278 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:54:09.549304 | orchestrator | Wednesday 28 May 2025 18:54:09 +0000 (0:00:01.091) 0:00:23.834 ********* 2025-05-28 18:54:10.640398 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWM/tIduJdHuAEUYH4rYpt0i+ZNNEnNB+olZIfwqEHbDKWkHY8L6BK7FNYTsFYnnvV/Eg248qLOxHZL7xYXhQOHnLDMxjAdw8fwvkbBymsaK6f4xd8lwEk/+pQGCpBvarIyxZBrRBGstph+SIriize+eExXJKVZIWBVXIjeGMhozwrZgxlj80DCFhHFTPvtCXEMhn6sZCLYT3DL1BGYeoyTekzE/FwycQuDWSULGpX7qcKVVyOXg+DP5iZXloo/QCANtWP9phKDHFiUpLTL2Ybw25GzN+lFddCNNivBaU0W5HDMcfbPIp6lB7yRBiJLPMesEgRuGoAJJIhauxTHswPZA0LBDgUmPnuh7f1oRBW4ak7/a4rz72jOcwq1swRigj+q/huBMVAJHGlWu4KpdiNdXktHMNNmH4JXC5WMot0rwA9ku0JhDj6l28Ipa5wccaWmDWHQUYufFXgewzO2CMGkNzYXw6EkvHyc2WkriXy2wTIj5ApfWex6nB3Z4Wd848=) 2025-05-28 18:54:10.642431 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCVPb+hL8nqD5lHsq5RDF3pmzA8UdFlt1n+XdFQRREouPzU7jpkWy1FubWplJMp9maBFtzpWBf/cRfl0qN3jI+0=) 2025-05-28 18:54:10.642671 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKPFPenUcpPDmk5gHGipkcybATvpZpeoBiY+MoJERlGO) 2025-05-28 18:54:10.643721 | orchestrator | 2025-05-28 18:54:10.644735 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:54:10.645465 | orchestrator | Wednesday 28 May 2025 18:54:10 +0000 (0:00:01.092) 0:00:24.926 ********* 2025-05-28 18:54:11.718624 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKLBpnJFttbxm8AQTLVb6ggQrmYlmUbPsolJKFVE+qyPZfV83+1j6NpGxO17Zwu09ivfswxWtxIx9iKHdZvF0B0=) 2025-05-28 18:54:11.718842 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCt7d+xy4PvG0WzDn6XfvLqzGFCI2vD8wuNiaUYL3zqE9YlZgW9IPxaem7jBEa8T7p5t37ijomqqJkUNy3molZmRtKIkOYQACBswcnve/jx5073yZ8zLf+tYjvjiZTUBdJrEfByBevicAlR6qHgb1PgrPabv/syfFfV9bkN7Ujxf5ormIBZ9FGtijxvn02EGFwjWUPEADhzNTmlTXBeMkDCS7nVrJFpPOtGdNICKf5kiCaXd2G/0F9Vt06AQugdMmsFV74YKXRpE7Jnk6bLdpi30WQvj1tsUZLmDEGu83k1yLhOAdkb4fVp0/3GhpnuBVFzfFlSuEee1sOXu7iZ+H5WXePT21s/Lp0vIWt3Vvi9n8WQ/xgs8mFXsy76Kjo3An3ay6PrDO/ZYXw052RLCFGFT+EJUn60dmgzV30Gf4p5k3BNp4rKxo6Y3pTp2MPpsWK4EKELz25wstZojBR5s1gKVGRyumHbLqN5JhmSG2DQ8dD3xZdDC95Fwt9CoGT3ads=) 2025-05-28 18:54:11.719528 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBR9UXoW+Dl0P+xreq+NcyD4XSLMIfch4eTc/ji0krb9) 2025-05-28 18:54:11.720287 | orchestrator | 2025-05-28 18:54:11.720807 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:54:11.722068 | orchestrator | Wednesday 28 May 2025 18:54:11 +0000 (0:00:01.076) 0:00:26.003 ********* 2025-05-28 18:54:12.866255 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMVFNLgQnUXtYLIri9So+sLEqyq/PenSCuUsi+wkijotAy0+5RD6nnKF8L/VlX0gVs6UdnMUWRn1twQ+k4lPIqo=) 2025-05-28 18:54:12.866827 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCzJowkxeXjVFZEyPm3mOyOUMe/H5Nu0Ho9CegGUMnSL7wknzhxH6NKjbv4dUin5qH78z5F8kAG+Hx6Y69Lx944qRoX9/hlpTjVuFsE1x3LdTZ/g2uk3iYkeuYrGu598bVVtgyGjmAurb1+oYzEYG3b7MJsrFeM8stWOYOwkJFNRIbf+QChR0f7sR6KyU6/QUxoOGhOEmoPuSHVPUR0ixicMwQ2t9BGtyftPqk5vvcpRKBMiRup8qrycndIOKpVx2vhecbLj8XsWpxLtbl9G8YEzinJWsqlbHCmDJvS3QCmyJRMnwd4PmUOCsLp23f5hC6dfckBUcGdAStRAG5q+I3TOe+qDvf9VuNfNa1xXvewgDHjKMLmVUuLfjjGsD3kn3BhjIMSna0Hjj47k9oi82G5Mjd3q3Bb0MrcqLFeN4Z1W9WxZDvisjOjDx0yWt8OxlNwIkpxGy8CnV/qZwEzsDh0gOhuVMwHso0SUncwc5R/vEXD0PqccfmXLuzJ81LaOzM=) 2025-05-28 18:54:12.867454 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIW48LRyXK5LSE7KeJ8yfuW0gQcY9qi1URoArykec0Rv) 2025-05-28 18:54:12.868234 | orchestrator | 2025-05-28 18:54:12.868869 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-28 18:54:12.869591 | orchestrator | Wednesday 28 May 2025 18:54:12 +0000 (0:00:01.148) 0:00:27.151 ********* 2025-05-28 18:54:13.953759 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILFVzB6CCV2MO28ly5LZsxpq78C/aWZQYB4sAbeDCB0l) 2025-05-28 18:54:13.953819 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnTzh77GiWfX/02Rm0VMfQT6qBnoEZ0Ey7WFVHG9ryTrap/AopDYjZG8pI6nt9+3v5v9k/eAwKZtm3WRanE59PDdCpoex+Sm85CDcYPpVi47vmh53y8k9Q4tuafGovMEAdBXFpdhYMlQLCe8VLas2drApGAQKyVJ4ntSJ6paAF559cDFuE2XE9CwBViKPnPx1boJlvNaygaWe1jiiklx3pZ8wacdzcwmH65JZoAM9eWWlbMjmFskst6gbKz5NonuVzUuj23Rax6xON+gpuMhoja5oteaox+1NquHbPoLTFFLGwsTIkave+G9VGsyp5SwrcKkdPDRVxUSZTRYsE+QDnOBsba6pEysMQs4ZiapJvGEMl2y77D46AogJOq4BA+zjnQh/W79n390IcytbN4FRrD/jZ8QwFyjInBgy9Y8TY6rzzXZsaqYss0TyrAOFNtxEM+3NxnTyNrviNYJvh62pYsBUvUJwEsH4xYTpBuu9G6H2IhpFKlyO7ChUmGh58TUk=) 2025-05-28 18:54:13.953838 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKsIj7gmmtKuvIokquEW+dbftkdKcvM+wwTD9bRV0vI+G/kfmb3JIMusFfhuENgteilGARIzDPH30mGrba7ryxg=) 2025-05-28 18:54:13.954762 | orchestrator | 2025-05-28 18:54:13.955107 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-28 18:54:13.955479 | orchestrator | Wednesday 28 May 2025 18:54:13 +0000 (0:00:01.087) 0:00:28.239 ********* 2025-05-28 18:54:14.161412 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-28 18:54:14.162116 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-28 18:54:14.162266 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-28 18:54:14.163289 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-28 18:54:14.163794 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-28 18:54:14.164665 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-28 18:54:14.165007 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-28 18:54:14.165504 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:54:14.165971 | orchestrator | 2025-05-28 18:54:14.166853 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-28 18:54:14.167120 | orchestrator | Wednesday 28 May 2025 18:54:14 +0000 (0:00:00.209) 0:00:28.448 ********* 2025-05-28 18:54:14.235241 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:54:14.236493 | orchestrator | 2025-05-28 18:54:14.237698 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-28 18:54:14.239324 | orchestrator | Wednesday 28 May 2025 18:54:14 +0000 (0:00:00.074) 0:00:28.522 ********* 2025-05-28 18:54:14.312617 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:54:14.313244 | orchestrator | 2025-05-28 18:54:14.313940 | orchestrator | TASK 
[osism.commons.known_hosts : Set file permissions] ************************ 2025-05-28 18:54:14.315034 | orchestrator | Wednesday 28 May 2025 18:54:14 +0000 (0:00:00.076) 0:00:28.599 ********* 2025-05-28 18:54:14.958290 | orchestrator | changed: [testbed-manager] 2025-05-28 18:54:14.958392 | orchestrator | 2025-05-28 18:54:14.960151 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 18:54:14.960862 | orchestrator | 2025-05-28 18:54:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 18:54:14.960897 | orchestrator | 2025-05-28 18:54:14 | INFO  | Please wait and do not abort execution. 2025-05-28 18:54:14.961942 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 18:54:14.963477 | orchestrator | 2025-05-28 18:54:14.964546 | orchestrator | Wednesday 28 May 2025 18:54:14 +0000 (0:00:00.645) 0:00:29.245 ********* 2025-05-28 18:54:14.965810 | orchestrator | =============================================================================== 2025-05-28 18:54:14.966991 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.13s 2025-05-28 18:54:14.967366 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.73s 2025-05-28 18:54:14.968254 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2025-05-28 18:54:14.969986 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-05-28 18:54:14.970442 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-05-28 18:54:14.970908 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-05-28 18:54:14.971283 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 
1.14s 2025-05-28 18:54:14.972439 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-05-28 18:54:14.972764 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-05-28 18:54:14.973673 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-05-28 18:54:14.974205 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-28 18:54:14.974619 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-28 18:54:14.975222 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-28 18:54:14.975518 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-28 18:54:14.976281 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-28 18:54:14.976547 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-28 18:54:14.977076 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.65s 2025-05-28 18:54:14.977868 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.21s 2025-05-28 18:54:14.978630 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.21s 2025-05-28 18:54:14.979019 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-05-28 18:54:15.447202 | orchestrator | + osism apply squid 2025-05-28 18:54:17.051068 | orchestrator | 2025-05-28 18:54:17 | INFO  | Task 2aecf26e-265c-4649-adbd-290edaf23991 (squid) was prepared for execution. 
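The ssh-keyscan / known_hosts sequence above boils down to: scan each node's host keys, append one entry per key type to a known_hosts file, then fix file permissions. A minimal sketch of that pattern (not the role's actual implementation; the file path is illustrative, and the single entry is one of the scanned keys shown above):

```shell
# Sketch of the osism.commons.known_hosts flow: write scanned entries,
# then set permissions. KNOWN_HOSTS stands in for the real target file.
KNOWN_HOSTS=$(mktemp)
# In the real run this line is produced by: ssh-keyscan -t ed25519 192.168.16.5
printf '%s\n' '192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE+dSmKzq7oTyqDeiNUW6cZrYDEMYMLBKhqkXimayh7z' >> "$KNOWN_HOSTS"
chmod 0644 "$KNOWN_HOSTS"   # corresponds to the "Set file permissions" task
grep -c '^192\.168\.16\.' "$KNOWN_HOSTS"   # prints 1
```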
2025-05-28 18:54:17.051165 | orchestrator | 2025-05-28 18:54:17 | INFO  | It takes a moment until task 2aecf26e-265c-4649-adbd-290edaf23991 (squid) has been started and output is visible here.
2025-05-28 18:54:20.153155 | orchestrator |
2025-05-28 18:54:20.153246 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-05-28 18:54:20.155155 | orchestrator |
2025-05-28 18:54:20.155873 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-05-28 18:54:20.157443 | orchestrator | Wednesday 28 May 2025 18:54:20 +0000 (0:00:00.119) 0:00:00.119 *********
2025-05-28 18:54:20.239420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-05-28 18:54:20.239914 | orchestrator |
2025-05-28 18:54:20.240350 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-05-28 18:54:20.241236 | orchestrator | Wednesday 28 May 2025 18:54:20 +0000 (0:00:00.095) 0:00:00.214 *********
2025-05-28 18:54:21.923918 | orchestrator | ok: [testbed-manager]
2025-05-28 18:54:21.925251 | orchestrator |
2025-05-28 18:54:21.925714 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-05-28 18:54:21.926251 | orchestrator | Wednesday 28 May 2025 18:54:21 +0000 (0:00:01.677) 0:00:01.892 *********
2025-05-28 18:54:23.193298 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-05-28 18:54:23.194520 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-05-28 18:54:23.194559 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-05-28 18:54:23.195280 | orchestrator |
2025-05-28 18:54:23.196146 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-05-28 18:54:23.196371 | orchestrator | Wednesday 28 May 2025 18:54:23 +0000 (0:00:01.274) 0:00:03.167 *********
2025-05-28 18:54:24.396283 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-05-28 18:54:24.396489 | orchestrator |
2025-05-28 18:54:24.397274 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-05-28 18:54:24.397662 | orchestrator | Wednesday 28 May 2025 18:54:24 +0000 (0:00:01.202) 0:00:04.369 *********
2025-05-28 18:54:24.738677 | orchestrator | ok: [testbed-manager]
2025-05-28 18:54:24.739271 | orchestrator |
2025-05-28 18:54:24.740245 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-05-28 18:54:24.741031 | orchestrator | Wednesday 28 May 2025 18:54:24 +0000 (0:00:00.343) 0:00:04.713 *********
2025-05-28 18:54:25.785376 | orchestrator | changed: [testbed-manager]
2025-05-28 18:54:25.787244 | orchestrator |
2025-05-28 18:54:25.787298 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-05-28 18:54:25.788593 | orchestrator | Wednesday 28 May 2025 18:54:25 +0000 (0:00:01.045) 0:00:05.759 *********
2025-05-28 18:54:58.323895 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-05-28 18:54:58.324033 | orchestrator | ok: [testbed-manager]
2025-05-28 18:54:58.324056 | orchestrator |
2025-05-28 18:54:58.324069 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-05-28 18:54:58.325401 | orchestrator | Wednesday 28 May 2025 18:54:58 +0000 (0:00:32.536) 0:00:38.296 *********
2025-05-28 18:55:10.770692 | orchestrator | changed: [testbed-manager]
2025-05-28 18:55:10.770814 | orchestrator |
2025-05-28 18:55:10.773673 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-05-28 18:55:10.773767 | orchestrator | Wednesday 28 May 2025 18:55:10 +0000 (0:00:12.443) 0:00:50.739 *********
2025-05-28 18:56:10.860986 | orchestrator | Pausing for 60 seconds
2025-05-28 18:56:10.861100 | orchestrator | changed: [testbed-manager]
2025-05-28 18:56:10.861118 | orchestrator |
2025-05-28 18:56:10.861254 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-05-28 18:56:10.861692 | orchestrator | Wednesday 28 May 2025 18:56:10 +0000 (0:01:00.091) 0:01:50.830 *********
2025-05-28 18:56:10.920691 | orchestrator | ok: [testbed-manager]
2025-05-28 18:56:10.921270 | orchestrator |
2025-05-28 18:56:10.922180 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-05-28 18:56:10.923171 | orchestrator | Wednesday 28 May 2025 18:56:10 +0000 (0:00:00.065) 0:01:50.896 *********
2025-05-28 18:56:11.590272 | orchestrator | changed: [testbed-manager]
2025-05-28 18:56:11.590864 | orchestrator |
2025-05-28 18:56:11.592048 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 18:56:11.592906 | orchestrator | 2025-05-28 18:56:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 18:56:11.593100 | orchestrator | 2025-05-28 18:56:11 | INFO  | Please wait and do not abort execution.
2025-05-28 18:56:11.594619 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 18:56:11.595379 | orchestrator |
2025-05-28 18:56:11.596315 | orchestrator | Wednesday 28 May 2025 18:56:11 +0000 (0:00:00.667) 0:01:51.564 *********
2025-05-28 18:56:11.596602 | orchestrator | ===============================================================================
2025-05-28 18:56:11.597348 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2025-05-28 18:56:11.597702 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.54s
2025-05-28 18:56:11.598406 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.44s
2025-05-28 18:56:11.598730 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.68s
2025-05-28 18:56:11.599226 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.27s
2025-05-28 18:56:11.599598 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.20s
2025-05-28 18:56:11.600459 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.05s
2025-05-28 18:56:11.600712 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s
2025-05-28 18:56:11.601183 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s
2025-05-28 18:56:11.602419 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2025-05-28 18:56:11.602582 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-05-28 18:56:12.043846 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-28 18:56:12.043949 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-05-28 18:56:12.048662 | orchestrator | ++ semver 8.1.0 9.0.0
2025-05-28 18:56:12.105475 | orchestrator | + [[ -1 -lt 0 ]]
2025-05-28 18:56:12.105547 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-28 18:56:12.105562 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml
2025-05-28 18:56:12.110696 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-28 18:56:12.116422 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-28 18:56:12.123878 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-05-28 18:56:13.573162 | orchestrator | 2025-05-28 18:56:13 | INFO  | Task 813c2497-996e-4f3d-9ad6-99ed1b1c1e6f (operator) was prepared for execution.
2025-05-28 18:56:13.573252 | orchestrator | 2025-05-28 18:56:13 | INFO  | It takes a moment until task 813c2497-996e-4f3d-9ad6-99ed1b1c1e6f (operator) has been started and output is visible here.
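The `sed -i` calls above uncomment the `network_dispatcher_scripts` block in the inventory, gated on the testbed version not being `latest` (and on `semver 8.1.0 9.0.0` returning a negative value). A minimal reproduction of that uncommenting pattern, with a temp file standing in for the real inventory file:

```shell
# Reproduces the uncommenting pattern from the log; FILE stands in for
# /opt/configuration/inventory/group_vars/testbed-nodes.yml.
VERSION="8.1.0"
FILE=$(mktemp)
printf '%s\n' '# network_dispatcher_scripts:' > "$FILE"
# Only the "not latest" half of the gate is reproduced here.
if [ "$VERSION" != "latest" ]; then
  sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' "$FILE"
fi
cat "$FILE"   # prints: network_dispatcher_scripts:
```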
2025-05-28 18:56:16.641854 | orchestrator |
2025-05-28 18:56:16.642424 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-05-28 18:56:16.643412 | orchestrator |
2025-05-28 18:56:16.644202 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-28 18:56:16.644655 | orchestrator | Wednesday 28 May 2025 18:56:16 +0000 (0:00:00.092) 0:00:00.092 *********
2025-05-28 18:56:20.036929 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:56:20.037095 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:56:20.037619 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:56:20.038792 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:56:20.038832 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:56:20.042362 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:56:20.042738 | orchestrator |
2025-05-28 18:56:20.043428 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-05-28 18:56:20.043764 | orchestrator | Wednesday 28 May 2025 18:56:20 +0000 (0:00:03.394) 0:00:03.486 *********
2025-05-28 18:56:20.853993 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:56:20.855187 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:56:20.855275 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:56:20.855360 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:56:20.857189 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:56:20.857915 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:56:20.858074 | orchestrator |
2025-05-28 18:56:20.858652 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-05-28 18:56:20.860885 | orchestrator |
2025-05-28 18:56:20.861616 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-05-28 18:56:20.861976 | orchestrator | Wednesday 28 May 2025 18:56:20 +0000 (0:00:00.819) 0:00:04.306 *********
2025-05-28 18:56:20.920947 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:56:20.956222 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:56:20.983997 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:56:21.028848 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:56:21.029925 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:56:21.032753 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:56:21.032776 | orchestrator |
2025-05-28 18:56:21.032791 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-05-28 18:56:21.034989 | orchestrator | Wednesday 28 May 2025 18:56:21 +0000 (0:00:00.175) 0:00:04.481 *********
2025-05-28 18:56:21.090732 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:56:21.132438 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:56:21.158345 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:56:21.200867 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:56:21.201900 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:56:21.205660 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:56:21.206375 | orchestrator |
2025-05-28 18:56:21.207828 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-05-28 18:56:21.208804 | orchestrator | Wednesday 28 May 2025 18:56:21 +0000 (0:00:00.172) 0:00:04.653 *********
2025-05-28 18:56:21.853352 | orchestrator | changed: [testbed-node-1]
2025-05-28 18:56:21.853546 | orchestrator | changed: [testbed-node-5]
2025-05-28 18:56:21.855020 | orchestrator | changed: [testbed-node-4]
2025-05-28 18:56:21.858346 | orchestrator | changed: [testbed-node-2]
2025-05-28 18:56:21.858371 | orchestrator | changed: [testbed-node-3]
2025-05-28 18:56:21.858796 | orchestrator | changed: [testbed-node-0]
2025-05-28 18:56:21.858817 | orchestrator |
2025-05-28 18:56:21.860299 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-05-28 18:56:21.861682 | orchestrator | Wednesday 28 May 2025 18:56:21 +0000 (0:00:00.649) 0:00:05.303 *********
2025-05-28 18:56:22.699213 | orchestrator | changed: [testbed-node-5]
2025-05-28 18:56:22.699369 | orchestrator | changed: [testbed-node-3]
2025-05-28 18:56:22.699803 | orchestrator | changed: [testbed-node-1]
2025-05-28 18:56:22.700876 | orchestrator | changed: [testbed-node-0]
2025-05-28 18:56:22.702518 | orchestrator | changed: [testbed-node-2]
2025-05-28 18:56:22.703750 | orchestrator | changed: [testbed-node-4]
2025-05-28 18:56:22.707105 | orchestrator |
2025-05-28 18:56:22.707300 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-05-28 18:56:22.707800 | orchestrator | Wednesday 28 May 2025 18:56:22 +0000 (0:00:00.845) 0:00:06.149 *********
2025-05-28 18:56:23.999521 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-05-28 18:56:23.999672 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-05-28 18:56:23.999705 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-05-28 18:56:24.000614 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-05-28 18:56:24.001155 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-05-28 18:56:24.006149 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-05-28 18:56:24.006164 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-05-28 18:56:24.006171 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-05-28 18:56:24.006702 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-05-28 18:56:24.007196 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-05-28 18:56:24.007684 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-05-28 18:56:24.008166 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-05-28 18:56:24.009301 | orchestrator |
2025-05-28 18:56:24.009321 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-05-28 18:56:24.009872 | orchestrator | Wednesday 28 May 2025 18:56:23 +0000 (0:00:01.300) 0:00:07.449 *********
2025-05-28 18:56:25.368275 | orchestrator | changed: [testbed-node-2]
2025-05-28 18:56:25.368416 | orchestrator | changed: [testbed-node-4]
2025-05-28 18:56:25.368509 | orchestrator | changed: [testbed-node-3]
2025-05-28 18:56:25.368525 | orchestrator | changed: [testbed-node-5]
2025-05-28 18:56:25.369051 | orchestrator | changed: [testbed-node-1]
2025-05-28 18:56:25.369297 | orchestrator | changed: [testbed-node-0]
2025-05-28 18:56:25.369318 | orchestrator |
2025-05-28 18:56:25.369532 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-05-28 18:56:25.369924 | orchestrator | Wednesday 28 May 2025 18:56:25 +0000 (0:00:01.368) 0:00:08.818 *********
2025-05-28 18:56:26.591926 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-05-28 18:56:26.592991 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-05-28 18:56:26.593992 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-05-28 18:56:26.758993 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-05-28 18:56:26.759146 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-05-28 18:56:26.759222 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-05-28 18:56:26.759626 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-05-28 18:56:26.760016 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-05-28 18:56:26.760413 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-05-28 18:56:26.761184 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-05-28 18:56:26.761571 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-05-28 18:56:26.762077 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-05-28 18:56:26.762493 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-05-28 18:56:26.763069 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-05-28 18:56:26.763381 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-05-28 18:56:26.766943 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-05-28 18:56:26.767184 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-05-28 18:56:26.767529 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-05-28 18:56:26.767949 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-05-28 18:56:26.768232 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-05-28 18:56:26.768566 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-05-28 18:56:26.769165 | orchestrator |
2025-05-28 18:56:26.769804 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-28 18:56:26.770096 | orchestrator | Wednesday 28 May 2025 18:56:26 +0000 (0:00:01.390) 0:00:10.209 *********
2025-05-28 18:56:27.436351 | orchestrator | changed: [testbed-node-3]
2025-05-28 18:56:27.436471 | orchestrator | changed: [testbed-node-5]
2025-05-28 18:56:27.437437 | orchestrator | changed: [testbed-node-2]
2025-05-28 18:56:27.438843 | orchestrator | changed: [testbed-node-0]
2025-05-28 18:56:27.440772 | orchestrator | changed: [testbed-node-4]
2025-05-28 18:56:27.441757 | orchestrator | changed: [testbed-node-1]
2025-05-28 18:56:27.442566 | orchestrator |
2025-05-28 18:56:27.443340 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-28 18:56:27.444146 | orchestrator | Wednesday 28 May 2025 18:56:27 +0000 (0:00:00.675) 0:00:10.885 *********
2025-05-28 18:56:27.515526 | orchestrator | skipping: [testbed-node-0]
2025-05-28 18:56:27.532450 | orchestrator | skipping: [testbed-node-1]
2025-05-28 18:56:27.555312 | orchestrator | skipping: [testbed-node-2]
2025-05-28 18:56:27.600378 | orchestrator | skipping: [testbed-node-3]
2025-05-28 18:56:27.600743 | orchestrator | skipping: [testbed-node-4]
2025-05-28 18:56:27.604485 | orchestrator | skipping: [testbed-node-5]
2025-05-28 18:56:27.605109 | orchestrator |
2025-05-28 18:56:27.605596 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-28 18:56:27.609942 | orchestrator | Wednesday 28 May 2025 18:56:27 +0000 (0:00:00.167) 0:00:11.052 *********
2025-05-28 18:56:28.313734 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-28 18:56:28.316857 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-28 18:56:28.317629 | orchestrator | changed: [testbed-node-0]
2025-05-28 18:56:28.319192 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-28 18:56:28.321196 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 18:56:28.322178 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:56:28.323088 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:56:28.325237 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:56:28.326093 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-28 18:56:28.327052 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:56:28.327922 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 18:56:28.328354 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:56:28.329555 | orchestrator | 2025-05-28 18:56:28.334856 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-28 18:56:28.335518 | orchestrator | Wednesday 28 May 2025 18:56:28 +0000 (0:00:00.711) 0:00:11.764 ********* 2025-05-28 18:56:28.375177 | orchestrator | skipping: [testbed-node-0] 2025-05-28 18:56:28.392789 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:56:28.412801 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:56:28.452355 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:56:28.452473 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:56:28.452867 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:56:28.454109 | orchestrator | 2025-05-28 18:56:28.455453 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-28 18:56:28.455542 | orchestrator | Wednesday 28 May 2025 18:56:28 +0000 (0:00:00.140) 0:00:11.904 ********* 2025-05-28 18:56:28.492466 | orchestrator | skipping: [testbed-node-0] 2025-05-28 18:56:28.522581 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:56:28.538155 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:56:28.592714 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:56:28.592813 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:56:28.596293 | orchestrator 
| skipping: [testbed-node-5] 2025-05-28 18:56:28.596491 | orchestrator | 2025-05-28 18:56:28.596695 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-28 18:56:28.596992 | orchestrator | Wednesday 28 May 2025 18:56:28 +0000 (0:00:00.139) 0:00:12.044 ********* 2025-05-28 18:56:28.646355 | orchestrator | skipping: [testbed-node-0] 2025-05-28 18:56:28.669739 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:56:28.692384 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:56:28.717783 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:56:28.748278 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:56:28.748695 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:56:28.749097 | orchestrator | 2025-05-28 18:56:28.751252 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-28 18:56:28.751511 | orchestrator | Wednesday 28 May 2025 18:56:28 +0000 (0:00:00.157) 0:00:12.201 ********* 2025-05-28 18:56:29.457466 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:56:29.458396 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:56:29.459669 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:56:29.460929 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:56:29.461724 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:56:29.463163 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:56:29.464002 | orchestrator | 2025-05-28 18:56:29.464882 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-28 18:56:29.465750 | orchestrator | Wednesday 28 May 2025 18:56:29 +0000 (0:00:00.705) 0:00:12.907 ********* 2025-05-28 18:56:29.590235 | orchestrator | skipping: [testbed-node-0] 2025-05-28 18:56:29.631448 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:56:29.737103 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:56:29.738929 | orchestrator | skipping: 
[testbed-node-3] 2025-05-28 18:56:29.740525 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:56:29.741448 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:56:29.742896 | orchestrator | 2025-05-28 18:56:29.744425 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 18:56:29.744707 | orchestrator | 2025-05-28 18:56:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 18:56:29.745786 | orchestrator | 2025-05-28 18:56:29 | INFO  | Please wait and do not abort execution. 2025-05-28 18:56:29.746775 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 18:56:29.747656 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 18:56:29.748501 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 18:56:29.749172 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 18:56:29.750226 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 18:56:29.750708 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 18:56:29.751632 | orchestrator | 2025-05-28 18:56:29.752924 | orchestrator | Wednesday 28 May 2025 18:56:29 +0000 (0:00:00.282) 0:00:13.189 ********* 2025-05-28 18:56:29.752944 | orchestrator | =============================================================================== 2025-05-28 18:56:29.753658 | orchestrator | Gathering Facts --------------------------------------------------------- 3.39s 2025-05-28 18:56:29.754365 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.39s 2025-05-28 18:56:29.755335 | orchestrator | 
osism.commons.operator : Copy user sudoers file ------------------------- 1.37s 2025-05-28 18:56:29.756252 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.30s 2025-05-28 18:56:29.756720 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.85s 2025-05-28 18:56:29.757438 | orchestrator | Do not require tty for all users ---------------------------------------- 0.82s 2025-05-28 18:56:29.758144 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2025-05-28 18:56:29.758789 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s 2025-05-28 18:56:29.759563 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.68s 2025-05-28 18:56:29.760299 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s 2025-05-28 18:56:29.760732 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.28s 2025-05-28 18:56:29.761497 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-05-28 18:56:29.762237 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2025-05-28 18:56:29.762757 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2025-05-28 18:56:29.763500 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-05-28 18:56:29.763990 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-05-28 18:56:29.764555 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2025-05-28 18:56:30.411894 | orchestrator | + osism apply --environment custom facts 2025-05-28 18:56:31.807789 | orchestrator | 2025-05-28 18:56:31 | INFO  | Trying to run play 
facts in environment custom 2025-05-28 18:56:31.855955 | orchestrator | 2025-05-28 18:56:31 | INFO  | Task 15fc52ae-4ab2-4906-8689-3ffe953ef7f1 (facts) was prepared for execution. 2025-05-28 18:56:31.857952 | orchestrator | 2025-05-28 18:56:31 | INFO  | It takes a moment until task 15fc52ae-4ab2-4906-8689-3ffe953ef7f1 (facts) has been started and output is visible here. 2025-05-28 18:56:34.896391 | orchestrator | 2025-05-28 18:56:34.897441 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-28 18:56:34.898999 | orchestrator | 2025-05-28 18:56:34.899031 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-28 18:56:34.901049 | orchestrator | Wednesday 28 May 2025 18:56:34 +0000 (0:00:00.099) 0:00:00.099 ********* 2025-05-28 18:56:36.150295 | orchestrator | ok: [testbed-manager] 2025-05-28 18:56:37.278202 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:56:37.279189 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:56:37.279238 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:56:37.279251 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:56:37.281270 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:56:37.281634 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:56:37.282982 | orchestrator | 2025-05-28 18:56:37.283744 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-28 18:56:37.284762 | orchestrator | Wednesday 28 May 2025 18:56:37 +0000 (0:00:02.382) 0:00:02.482 ********* 2025-05-28 18:56:38.451283 | orchestrator | ok: [testbed-manager] 2025-05-28 18:56:39.294311 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:56:39.296411 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:56:39.297249 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:56:39.298228 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:56:39.298509 | orchestrator | changed: 
[testbed-node-1] 2025-05-28 18:56:39.299472 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:56:39.300258 | orchestrator | 2025-05-28 18:56:39.301199 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-05-28 18:56:39.301692 | orchestrator | 2025-05-28 18:56:39.302427 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-28 18:56:39.303616 | orchestrator | Wednesday 28 May 2025 18:56:39 +0000 (0:00:02.017) 0:00:04.499 ********* 2025-05-28 18:56:39.387917 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:56:39.444495 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:56:39.445564 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:56:39.447179 | orchestrator | 2025-05-28 18:56:39.449050 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-28 18:56:39.449604 | orchestrator | Wednesday 28 May 2025 18:56:39 +0000 (0:00:00.150) 0:00:04.649 ********* 2025-05-28 18:56:39.578816 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:56:39.582850 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:56:39.585184 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:56:39.585226 | orchestrator | 2025-05-28 18:56:39.586340 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-28 18:56:39.586737 | orchestrator | Wednesday 28 May 2025 18:56:39 +0000 (0:00:00.133) 0:00:04.783 ********* 2025-05-28 18:56:39.707870 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:56:39.708326 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:56:39.708374 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:56:39.708762 | orchestrator | 2025-05-28 18:56:39.709316 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-28 18:56:39.709671 | orchestrator | Wednesday 28 May 2025 18:56:39 +0000 (0:00:00.130) 0:00:04.914 ********* 
2025-05-28 18:56:39.882363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 18:56:39.883314 | orchestrator | 2025-05-28 18:56:39.883422 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-28 18:56:39.883819 | orchestrator | Wednesday 28 May 2025 18:56:39 +0000 (0:00:00.174) 0:00:05.088 ********* 2025-05-28 18:56:40.280152 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:56:40.280332 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:56:40.280540 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:56:40.280814 | orchestrator | 2025-05-28 18:56:40.281562 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-28 18:56:40.282165 | orchestrator | Wednesday 28 May 2025 18:56:40 +0000 (0:00:00.397) 0:00:05.485 ********* 2025-05-28 18:56:40.400163 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:56:40.400284 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:56:40.400774 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:56:40.400901 | orchestrator | 2025-05-28 18:56:40.401804 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-28 18:56:40.401817 | orchestrator | Wednesday 28 May 2025 18:56:40 +0000 (0:00:00.120) 0:00:05.606 ********* 2025-05-28 18:56:41.330630 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:56:41.330739 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:56:41.334711 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:56:41.334740 | orchestrator | 2025-05-28 18:56:41.336516 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-28 18:56:41.337219 | orchestrator | Wednesday 28 May 2025 18:56:41 +0000 (0:00:00.929) 0:00:06.535 ********* 2025-05-28 18:56:41.762716 | 
orchestrator | ok: [testbed-node-3] 2025-05-28 18:56:41.763029 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:56:41.763622 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:56:41.764359 | orchestrator | 2025-05-28 18:56:41.768831 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-28 18:56:41.769390 | orchestrator | Wednesday 28 May 2025 18:56:41 +0000 (0:00:00.434) 0:00:06.969 ********* 2025-05-28 18:56:42.769865 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:56:42.770339 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:56:42.770914 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:56:42.771580 | orchestrator | 2025-05-28 18:56:42.771864 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-28 18:56:42.774742 | orchestrator | Wednesday 28 May 2025 18:56:42 +0000 (0:00:01.005) 0:00:07.975 ********* 2025-05-28 18:56:55.682604 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:56:55.682726 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:56:55.682742 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:56:55.682754 | orchestrator | 2025-05-28 18:56:55.682827 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-28 18:56:55.685406 | orchestrator | Wednesday 28 May 2025 18:56:55 +0000 (0:00:12.910) 0:00:20.885 ********* 2025-05-28 18:56:55.795249 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:56:55.795421 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:56:55.799691 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:56:55.800353 | orchestrator | 2025-05-28 18:56:55.801045 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-28 18:56:55.801692 | orchestrator | Wednesday 28 May 2025 18:56:55 +0000 (0:00:00.116) 0:00:21.002 ********* 2025-05-28 18:57:02.867935 | orchestrator | changed: 
[testbed-node-3] 2025-05-28 18:57:02.868604 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:57:02.868634 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:57:02.870434 | orchestrator | 2025-05-28 18:57:02.871527 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-28 18:57:02.872165 | orchestrator | Wednesday 28 May 2025 18:57:02 +0000 (0:00:07.070) 0:00:28.072 ********* 2025-05-28 18:57:03.285813 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:03.285913 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:03.286437 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:03.286901 | orchestrator | 2025-05-28 18:57:03.288398 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-28 18:57:03.289683 | orchestrator | Wednesday 28 May 2025 18:57:03 +0000 (0:00:00.418) 0:00:28.491 ********* 2025-05-28 18:57:06.881258 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-28 18:57:06.881439 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-28 18:57:06.882192 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-28 18:57:06.883763 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-28 18:57:06.883803 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-28 18:57:06.885343 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-28 18:57:06.886314 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-28 18:57:06.886864 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-28 18:57:06.887337 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-28 18:57:06.888301 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-28 18:57:06.888633 | 
orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-28 18:57:06.889142 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-28 18:57:06.889829 | orchestrator | 2025-05-28 18:57:06.890315 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-28 18:57:06.890843 | orchestrator | Wednesday 28 May 2025 18:57:06 +0000 (0:00:03.594) 0:00:32.085 ********* 2025-05-28 18:57:08.014611 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:08.014726 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:08.015008 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:08.016376 | orchestrator | 2025-05-28 18:57:08.017600 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-28 18:57:08.022140 | orchestrator | 2025-05-28 18:57:08.022221 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-28 18:57:08.022667 | orchestrator | Wednesday 28 May 2025 18:57:08 +0000 (0:00:01.133) 0:00:33.219 ********* 2025-05-28 18:57:09.807168 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:13.072856 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:13.073752 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:13.073799 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:13.073975 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:13.075151 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:13.075376 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:13.075883 | orchestrator | 2025-05-28 18:57:13.076486 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 18:57:13.076837 | orchestrator | 2025-05-28 18:57:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-28 18:57:13.076861 | orchestrator | 2025-05-28 18:57:13 | INFO  | Please wait and do not abort execution. 2025-05-28 18:57:13.077506 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 18:57:13.078266 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 18:57:13.078537 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 18:57:13.079531 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 18:57:13.080429 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 18:57:13.081016 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 18:57:13.081820 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 18:57:13.082253 | orchestrator | 2025-05-28 18:57:13.082881 | orchestrator | Wednesday 28 May 2025 18:57:13 +0000 (0:00:05.058) 0:00:38.278 ********* 2025-05-28 18:57:13.083570 | orchestrator | =============================================================================== 2025-05-28 18:57:13.084319 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.91s 2025-05-28 18:57:13.085060 | orchestrator | Install required packages (Debian) -------------------------------------- 7.07s 2025-05-28 18:57:13.085840 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.06s 2025-05-28 18:57:13.086169 | orchestrator | Copy fact files --------------------------------------------------------- 3.59s 2025-05-28 18:57:13.086409 | orchestrator | Create custom facts directory ------------------------------------------- 2.38s 2025-05-28 18:57:13.086686 | orchestrator | Copy fact 
file ---------------------------------------------------------- 2.02s 2025-05-28 18:57:13.087010 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.13s 2025-05-28 18:57:13.087400 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.01s 2025-05-28 18:57:13.087651 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.93s 2025-05-28 18:57:13.087944 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.43s 2025-05-28 18:57:13.088317 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s 2025-05-28 18:57:13.088522 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s 2025-05-28 18:57:13.088812 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s 2025-05-28 18:57:13.089145 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.15s 2025-05-28 18:57:13.089377 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.13s 2025-05-28 18:57:13.089779 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.13s 2025-05-28 18:57:13.090606 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-05-28 18:57:13.091004 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2025-05-28 18:57:13.568891 | orchestrator | + osism apply bootstrap 2025-05-28 18:57:14.993146 | orchestrator | 2025-05-28 18:57:14 | INFO  | Task c99b4f71-708c-46f5-9408-07c318253320 (bootstrap) was prepared for execution. 2025-05-28 18:57:14.993220 | orchestrator | 2025-05-28 18:57:14 | INFO  | It takes a moment until task c99b4f71-708c-46f5-9408-07c318253320 (bootstrap) has been started and output is visible here. 
2025-05-28 18:57:18.190735 | orchestrator | 2025-05-28 18:57:18.191786 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-28 18:57:18.191841 | orchestrator | 2025-05-28 18:57:18.191862 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-28 18:57:18.191881 | orchestrator | Wednesday 28 May 2025 18:57:18 +0000 (0:00:00.118) 0:00:00.118 ********* 2025-05-28 18:57:18.264499 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:18.302265 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:18.331748 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:18.351529 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:18.438275 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:18.439841 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:18.442791 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:18.442847 | orchestrator | 2025-05-28 18:57:18.442862 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-28 18:57:18.442875 | orchestrator | 2025-05-28 18:57:18.442887 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-28 18:57:18.443170 | orchestrator | Wednesday 28 May 2025 18:57:18 +0000 (0:00:00.261) 0:00:00.380 ********* 2025-05-28 18:57:22.067299 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:22.068537 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:22.068630 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:22.068645 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:22.068656 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:22.068722 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:22.069018 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:22.069617 | orchestrator | 2025-05-28 18:57:22.070077 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-05-28 18:57:22.070365 | orchestrator | 2025-05-28 18:57:22.071054 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-28 18:57:22.071582 | orchestrator | Wednesday 28 May 2025 18:57:22 +0000 (0:00:03.630) 0:00:04.010 ********* 2025-05-28 18:57:22.141622 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-28 18:57:22.142551 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-28 18:57:22.162412 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-28 18:57:22.181754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-28 18:57:22.182766 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-28 18:57:22.228710 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-28 18:57:22.229159 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-28 18:57:22.230068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 18:57:22.230205 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-28 18:57:22.231189 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-28 18:57:22.550863 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:57:22.552227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 18:57:22.553844 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-28 18:57:22.558275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-28 18:57:22.558312 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-28 18:57:22.558320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 18:57:22.559176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 18:57:22.559953 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-3)  2025-05-28 18:57:22.560963 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 18:57:22.561225 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-28 18:57:22.561859 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-28 18:57:22.562978 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 18:57:22.563476 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-28 18:57:22.563903 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-28 18:57:22.565058 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 18:57:22.565820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-28 18:57:22.566304 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-28 18:57:22.567319 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-28 18:57:22.568057 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-28 18:57:22.568573 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-28 18:57:22.569292 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-28 18:57:22.570347 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-28 18:57:22.571080 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 18:57:22.571709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-28 18:57:22.572590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-28 18:57:22.573289 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 18:57:22.573667 | orchestrator | skipping: [testbed-node-0] 2025-05-28 18:57:22.574377 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-28 18:57:22.574843 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-4)  2025-05-28 18:57:22.575231 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-28 18:57:22.575730 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-28 18:57:22.576074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-28 18:57:22.577134 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:57:22.577160 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-28 18:57:22.577368 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-28 18:57:22.577760 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-28 18:57:22.578276 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-28 18:57:22.578465 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:57:22.580449 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-28 18:57:22.580527 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:57:22.580994 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-28 18:57:22.581755 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-28 18:57:22.581864 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:57:22.587991 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-28 18:57:22.588026 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-28 18:57:22.588034 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:57:22.588042 | orchestrator | 2025-05-28 18:57:22.588051 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-28 18:57:22.588059 | orchestrator | 2025-05-28 18:57:22.588067 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-05-28 18:57:22.588075 | orchestrator | Wednesday 28 May 2025 18:57:22 +0000 (0:00:00.482) 
0:00:04.493 ********* 2025-05-28 18:57:22.639615 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:22.666833 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:22.692069 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:22.723490 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:22.783507 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:22.783623 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:22.784004 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:22.784554 | orchestrator | 2025-05-28 18:57:22.784749 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-28 18:57:22.785108 | orchestrator | Wednesday 28 May 2025 18:57:22 +0000 (0:00:00.232) 0:00:04.726 ********* 2025-05-28 18:57:24.026637 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:24.026755 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:24.026797 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:24.028347 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:24.030668 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:24.030840 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:24.031384 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:24.031748 | orchestrator | 2025-05-28 18:57:24.032224 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-28 18:57:24.032581 | orchestrator | Wednesday 28 May 2025 18:57:24 +0000 (0:00:01.242) 0:00:05.968 ********* 2025-05-28 18:57:25.392179 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:25.395156 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:25.398146 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:25.398209 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:25.398223 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:25.398276 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:25.399010 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:25.399774 | 
orchestrator | 2025-05-28 18:57:25.400532 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-28 18:57:25.401039 | orchestrator | Wednesday 28 May 2025 18:57:25 +0000 (0:00:01.363) 0:00:07.332 ********* 2025-05-28 18:57:25.691202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 18:57:25.691358 | orchestrator | 2025-05-28 18:57:25.692268 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-28 18:57:25.692691 | orchestrator | Wednesday 28 May 2025 18:57:25 +0000 (0:00:00.298) 0:00:07.630 ********* 2025-05-28 18:57:27.874700 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:57:27.875644 | orchestrator | changed: [testbed-manager] 2025-05-28 18:57:27.877813 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:57:27.881252 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:57:27.882505 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:57:27.883599 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:57:27.884930 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:57:27.885926 | orchestrator | 2025-05-28 18:57:27.886504 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-28 18:57:27.887366 | orchestrator | Wednesday 28 May 2025 18:57:27 +0000 (0:00:02.182) 0:00:09.813 ********* 2025-05-28 18:57:27.961726 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:57:28.180470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 18:57:28.180892 | orchestrator | 2025-05-28 18:57:28.181797 | 
orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-28 18:57:28.182431 | orchestrator | Wednesday 28 May 2025 18:57:28 +0000 (0:00:00.308) 0:00:10.122 ********* 2025-05-28 18:57:29.333692 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:57:29.333833 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:57:29.334794 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:57:29.335588 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:57:29.336590 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:57:29.337504 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:57:29.337613 | orchestrator | 2025-05-28 18:57:29.338184 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-28 18:57:29.339072 | orchestrator | Wednesday 28 May 2025 18:57:29 +0000 (0:00:01.145) 0:00:11.267 ********* 2025-05-28 18:57:29.421986 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:57:29.933441 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:57:29.933976 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:57:29.935243 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:57:29.936492 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:57:29.938292 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:57:29.939568 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:57:29.940159 | orchestrator | 2025-05-28 18:57:29.942604 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-28 18:57:29.943200 | orchestrator | Wednesday 28 May 2025 18:57:29 +0000 (0:00:00.607) 0:00:11.875 ********* 2025-05-28 18:57:30.060164 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:57:30.087690 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:57:30.115763 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:57:30.401514 | orchestrator | skipping: [testbed-node-0] 2025-05-28 
18:57:30.401781 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:57:30.402875 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:57:30.402896 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:30.403306 | orchestrator | 2025-05-28 18:57:30.403810 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-28 18:57:30.404439 | orchestrator | Wednesday 28 May 2025 18:57:30 +0000 (0:00:00.467) 0:00:12.342 ********* 2025-05-28 18:57:30.507685 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:57:30.539992 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:57:30.569064 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:57:30.608981 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:57:30.682837 | orchestrator | skipping: [testbed-node-0] 2025-05-28 18:57:30.682983 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:57:30.683000 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:57:30.683470 | orchestrator | 2025-05-28 18:57:30.686966 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-28 18:57:30.687005 | orchestrator | Wednesday 28 May 2025 18:57:30 +0000 (0:00:00.281) 0:00:12.624 ********* 2025-05-28 18:57:31.014480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 18:57:31.014659 | orchestrator | 2025-05-28 18:57:31.014733 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-28 18:57:31.015428 | orchestrator | Wednesday 28 May 2025 18:57:31 +0000 (0:00:00.331) 0:00:12.955 ********* 2025-05-28 18:57:31.342453 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 18:57:31.342616 | orchestrator | 2025-05-28 18:57:31.343158 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-28 18:57:31.345772 | orchestrator | Wednesday 28 May 2025 18:57:31 +0000 (0:00:00.328) 0:00:13.283 ********* 2025-05-28 18:57:32.500772 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:32.500942 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:32.501382 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:32.504035 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:32.504062 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:32.504150 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:32.504810 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:32.505500 | orchestrator | 2025-05-28 18:57:32.506239 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-28 18:57:32.506768 | orchestrator | Wednesday 28 May 2025 18:57:32 +0000 (0:00:01.157) 0:00:14.440 ********* 2025-05-28 18:57:32.591108 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:57:32.601225 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:57:32.628055 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:57:32.653428 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:57:32.712408 | orchestrator | skipping: [testbed-node-0] 2025-05-28 18:57:32.713036 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:57:32.713867 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:57:32.714279 | orchestrator | 2025-05-28 18:57:32.717872 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-28 18:57:32.717898 | orchestrator | Wednesday 28 May 2025 
18:57:32 +0000 (0:00:00.212) 0:00:14.653 ********* 2025-05-28 18:57:33.271844 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:33.272969 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:33.274239 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:33.275833 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:33.276944 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:33.277734 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:33.278590 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:33.279881 | orchestrator | 2025-05-28 18:57:33.280289 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-28 18:57:33.281164 | orchestrator | Wednesday 28 May 2025 18:57:33 +0000 (0:00:00.557) 0:00:15.211 ********* 2025-05-28 18:57:33.347618 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:57:33.407396 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:57:33.436250 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:57:33.509151 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:57:33.510402 | orchestrator | skipping: [testbed-node-0] 2025-05-28 18:57:33.513210 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:57:33.513253 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:57:33.513765 | orchestrator | 2025-05-28 18:57:33.514921 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-28 18:57:33.516276 | orchestrator | Wednesday 28 May 2025 18:57:33 +0000 (0:00:00.239) 0:00:15.450 ********* 2025-05-28 18:57:34.068349 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:57:34.068538 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:57:34.069660 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:34.070290 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:57:34.070705 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:57:34.071795 | orchestrator | changed: 
[testbed-node-1] 2025-05-28 18:57:34.072562 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:57:34.072828 | orchestrator | 2025-05-28 18:57:34.073624 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-28 18:57:34.074174 | orchestrator | Wednesday 28 May 2025 18:57:34 +0000 (0:00:00.552) 0:00:16.003 ********* 2025-05-28 18:57:35.175498 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:35.175690 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:57:35.176393 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:57:35.176928 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:57:35.177466 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:57:35.177833 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:57:35.178195 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:57:35.178646 | orchestrator | 2025-05-28 18:57:35.179280 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-28 18:57:35.179476 | orchestrator | Wednesday 28 May 2025 18:57:35 +0000 (0:00:01.111) 0:00:17.115 ********* 2025-05-28 18:57:36.336794 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:36.339652 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:36.341181 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:36.341745 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:36.343243 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:36.343563 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:36.344173 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:36.345136 | orchestrator | 2025-05-28 18:57:36.345525 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-28 18:57:36.346320 | orchestrator | Wednesday 28 May 2025 18:57:36 +0000 (0:00:01.161) 0:00:18.276 ********* 2025-05-28 18:57:36.663748 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 18:57:36.664778 | orchestrator | 2025-05-28 18:57:36.666005 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-28 18:57:36.666682 | orchestrator | Wednesday 28 May 2025 18:57:36 +0000 (0:00:00.327) 0:00:18.603 ********* 2025-05-28 18:57:36.746654 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:57:38.122582 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:57:38.123346 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:57:38.124324 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:57:38.125279 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:57:38.126853 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:57:38.128271 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:57:38.128418 | orchestrator | 2025-05-28 18:57:38.130372 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-28 18:57:38.131223 | orchestrator | Wednesday 28 May 2025 18:57:38 +0000 (0:00:01.458) 0:00:20.062 ********* 2025-05-28 18:57:38.194760 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:38.223950 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:38.247230 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:38.276069 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:38.343228 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:38.343349 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:38.344506 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:38.345510 | orchestrator | 2025-05-28 18:57:38.346631 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-28 18:57:38.346840 | orchestrator | Wednesday 28 May 2025 18:57:38 
+0000 (0:00:00.222) 0:00:20.285 ********* 2025-05-28 18:57:38.426705 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:38.461256 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:38.483166 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:38.518443 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:38.603965 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:38.604299 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:38.604335 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:38.604356 | orchestrator | 2025-05-28 18:57:38.604621 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-28 18:57:38.605037 | orchestrator | Wednesday 28 May 2025 18:57:38 +0000 (0:00:00.260) 0:00:20.545 ********* 2025-05-28 18:57:38.686388 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:38.712958 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:38.743156 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:38.768054 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:38.838834 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:38.838932 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:38.839132 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:38.839519 | orchestrator | 2025-05-28 18:57:38.839759 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-28 18:57:38.840503 | orchestrator | Wednesday 28 May 2025 18:57:38 +0000 (0:00:00.235) 0:00:20.781 ********* 2025-05-28 18:57:39.123114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 18:57:39.124144 | orchestrator | 2025-05-28 18:57:39.124705 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-28 18:57:39.127437 | 
orchestrator | Wednesday 28 May 2025 18:57:39 +0000 (0:00:00.282) 0:00:21.064 ********* 2025-05-28 18:57:39.638807 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:39.638994 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:39.639780 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:39.641205 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:39.641494 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:39.641756 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:39.642406 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:39.642782 | orchestrator | 2025-05-28 18:57:39.644631 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-28 18:57:39.644656 | orchestrator | Wednesday 28 May 2025 18:57:39 +0000 (0:00:00.516) 0:00:21.580 ********* 2025-05-28 18:57:39.752192 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:57:39.780967 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:57:39.806970 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:57:39.877009 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:57:39.877205 | orchestrator | skipping: [testbed-node-0] 2025-05-28 18:57:39.877879 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:57:39.879167 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:57:39.879854 | orchestrator | 2025-05-28 18:57:39.880223 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-28 18:57:39.884491 | orchestrator | Wednesday 28 May 2025 18:57:39 +0000 (0:00:00.239) 0:00:21.819 ********* 2025-05-28 18:57:40.936647 | orchestrator | changed: [testbed-manager] 2025-05-28 18:57:40.940455 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:40.943449 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:40.943711 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:40.944970 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:57:40.946324 | orchestrator | 
changed: [testbed-node-0] 2025-05-28 18:57:40.947103 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:57:40.947537 | orchestrator | 2025-05-28 18:57:40.949115 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-28 18:57:40.949856 | orchestrator | Wednesday 28 May 2025 18:57:40 +0000 (0:00:01.056) 0:00:22.875 ********* 2025-05-28 18:57:41.481558 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:41.481895 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:41.484335 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:41.484356 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:41.484368 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:41.484533 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:41.485037 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:41.485131 | orchestrator | 2025-05-28 18:57:41.485635 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-28 18:57:41.486137 | orchestrator | Wednesday 28 May 2025 18:57:41 +0000 (0:00:00.546) 0:00:23.421 ********* 2025-05-28 18:57:42.627727 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:42.627836 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:42.627851 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:42.627862 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:42.628001 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:57:42.628542 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:57:42.629355 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:57:42.629452 | orchestrator | 2025-05-28 18:57:42.630471 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-28 18:57:42.630493 | orchestrator | Wednesday 28 May 2025 18:57:42 +0000 (0:00:01.145) 0:00:24.567 ********* 2025-05-28 18:57:56.526885 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:56.527007 | orchestrator | ok: 
[testbed-node-4] 2025-05-28 18:57:56.527022 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:56.527035 | orchestrator | changed: [testbed-manager] 2025-05-28 18:57:56.527049 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:57:56.531992 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:57:56.532040 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:57:56.532060 | orchestrator | 2025-05-28 18:57:56.532100 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-28 18:57:56.532114 | orchestrator | Wednesday 28 May 2025 18:57:56 +0000 (0:00:13.892) 0:00:38.460 ********* 2025-05-28 18:57:56.613998 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:56.647832 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:56.682132 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:56.712808 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:56.782544 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:56.783169 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:56.783822 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:56.785452 | orchestrator | 2025-05-28 18:57:56.785888 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-28 18:57:56.787030 | orchestrator | Wednesday 28 May 2025 18:57:56 +0000 (0:00:00.263) 0:00:38.723 ********* 2025-05-28 18:57:56.854695 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:56.909867 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:56.944829 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:57.016213 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:57.017139 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:57.017629 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:57.018587 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:57.019215 | orchestrator | 2025-05-28 18:57:57.020865 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-05-28 18:57:57.020881 | orchestrator | Wednesday 28 May 2025 18:57:57 +0000 (0:00:00.234) 0:00:38.958 ********* 2025-05-28 18:57:57.098559 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:57.120872 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:57.148878 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:57.176354 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:57.245536 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:57.245799 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:57.246925 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:57.248428 | orchestrator | 2025-05-28 18:57:57.249325 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-28 18:57:57.250420 | orchestrator | Wednesday 28 May 2025 18:57:57 +0000 (0:00:00.228) 0:00:39.187 ********* 2025-05-28 18:57:57.535023 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 18:57:57.535175 | orchestrator | 2025-05-28 18:57:57.535378 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-28 18:57:57.536174 | orchestrator | Wednesday 28 May 2025 18:57:57 +0000 (0:00:00.289) 0:00:39.477 ********* 2025-05-28 18:57:59.122329 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:57:59.122437 | orchestrator | ok: [testbed-manager] 2025-05-28 18:57:59.122452 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:57:59.123301 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:57:59.123556 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:57:59.124732 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:57:59.125952 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:57:59.126757 | orchestrator | 2025-05-28 18:57:59.127431 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-28 18:57:59.128296 | orchestrator | Wednesday 28 May 2025 18:57:59 +0000 (0:00:01.583) 0:00:41.060 ********* 2025-05-28 18:58:00.286282 | orchestrator | changed: [testbed-manager] 2025-05-28 18:58:00.287760 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:58:00.291723 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:58:00.291776 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:58:00.291789 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:58:00.292887 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:58:00.294334 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:58:00.294669 | orchestrator | 2025-05-28 18:58:00.295496 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-28 18:58:00.296468 | orchestrator | Wednesday 28 May 2025 18:58:00 +0000 (0:00:01.165) 0:00:42.226 ********* 2025-05-28 18:58:01.095633 | orchestrator | ok: [testbed-manager] 2025-05-28 18:58:01.095740 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:58:01.095836 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:58:01.096904 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:58:01.097412 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:58:01.097698 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:58:01.098190 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:58:01.098986 | orchestrator | 2025-05-28 18:58:01.099258 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-28 18:58:01.100086 | orchestrator | Wednesday 28 May 2025 18:58:01 +0000 (0:00:00.809) 0:00:43.036 ********* 2025-05-28 18:58:01.448154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 
18:58:01.448267 | orchestrator | 2025-05-28 18:58:01.449202 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-28 18:58:01.449479 | orchestrator | Wednesday 28 May 2025 18:58:01 +0000 (0:00:00.353) 0:00:43.389 ********* 2025-05-28 18:58:02.481965 | orchestrator | changed: [testbed-manager] 2025-05-28 18:58:02.482419 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:58:02.483607 | orchestrator | changed: [testbed-node-4] 2025-05-28 18:58:02.485262 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:58:02.485503 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:58:02.486057 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:58:02.486922 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:58:02.487249 | orchestrator | 2025-05-28 18:58:02.488263 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-28 18:58:02.488913 | orchestrator | Wednesday 28 May 2025 18:58:02 +0000 (0:00:01.033) 0:00:44.422 ********* 2025-05-28 18:58:02.586765 | orchestrator | skipping: [testbed-manager] 2025-05-28 18:58:02.614488 | orchestrator | skipping: [testbed-node-3] 2025-05-28 18:58:02.649982 | orchestrator | skipping: [testbed-node-4] 2025-05-28 18:58:02.791737 | orchestrator | skipping: [testbed-node-5] 2025-05-28 18:58:02.791913 | orchestrator | skipping: [testbed-node-0] 2025-05-28 18:58:02.792671 | orchestrator | skipping: [testbed-node-1] 2025-05-28 18:58:02.793558 | orchestrator | skipping: [testbed-node-2] 2025-05-28 18:58:02.794258 | orchestrator | 2025-05-28 18:58:02.795194 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-28 18:58:02.795909 | orchestrator | Wednesday 28 May 2025 18:58:02 +0000 (0:00:00.311) 0:00:44.733 ********* 2025-05-28 18:58:15.669128 | orchestrator | changed: [testbed-node-3] 2025-05-28 18:58:15.669225 | orchestrator | changed: [testbed-node-4] 2025-05-28 
18:58:15.669236 | orchestrator | changed: [testbed-node-5] 2025-05-28 18:58:15.669839 | orchestrator | changed: [testbed-node-0] 2025-05-28 18:58:15.671938 | orchestrator | changed: [testbed-node-2] 2025-05-28 18:58:15.672319 | orchestrator | changed: [testbed-node-1] 2025-05-28 18:58:15.672945 | orchestrator | changed: [testbed-manager] 2025-05-28 18:58:15.673791 | orchestrator | 2025-05-28 18:58:15.674548 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-28 18:58:15.675364 | orchestrator | Wednesday 28 May 2025 18:58:15 +0000 (0:00:12.872) 0:00:57.606 ********* 2025-05-28 18:58:16.684929 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:58:16.685326 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:58:16.685919 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:58:16.687022 | orchestrator | ok: [testbed-manager] 2025-05-28 18:58:16.687825 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:58:16.688986 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:58:16.689272 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:58:16.689640 | orchestrator | 2025-05-28 18:58:16.690540 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-28 18:58:16.690793 | orchestrator | Wednesday 28 May 2025 18:58:16 +0000 (0:00:01.018) 0:00:58.624 ********* 2025-05-28 18:58:17.608040 | orchestrator | ok: [testbed-manager] 2025-05-28 18:58:17.609416 | orchestrator | ok: [testbed-node-3] 2025-05-28 18:58:17.610517 | orchestrator | ok: [testbed-node-4] 2025-05-28 18:58:17.611570 | orchestrator | ok: [testbed-node-5] 2025-05-28 18:58:17.611635 | orchestrator | ok: [testbed-node-0] 2025-05-28 18:58:17.612808 | orchestrator | ok: [testbed-node-1] 2025-05-28 18:58:17.614176 | orchestrator | ok: [testbed-node-2] 2025-05-28 18:58:17.614440 | orchestrator | 2025-05-28 18:58:17.615578 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-05-28 18:58:17.616229 | orchestrator | Wednesday 28 May 2025 18:58:17 +0000 (0:00:00.924) 0:00:59.548 *********
2025-05-28 18:58:17.696570 | orchestrator | ok: [testbed-manager]
2025-05-28 18:58:17.724107 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:58:17.749745 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:58:17.783926 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:58:17.862506 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:58:17.863425 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:58:17.863979 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:58:17.864921 | orchestrator |
2025-05-28 18:58:17.865640 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-05-28 18:58:17.866147 | orchestrator | Wednesday 28 May 2025 18:58:17 +0000 (0:00:00.255) 0:00:59.804 *********
2025-05-28 18:58:17.942436 | orchestrator | ok: [testbed-manager]
2025-05-28 18:58:17.977104 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:58:18.005221 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:58:18.047990 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:58:18.124120 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:58:18.125768 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:58:18.127268 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:58:18.128562 | orchestrator |
2025-05-28 18:58:18.129637 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-05-28 18:58:18.130567 | orchestrator | Wednesday 28 May 2025 18:58:18 +0000 (0:00:00.260) 0:01:00.065 *********
2025-05-28 18:58:18.457859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 18:58:18.458921 | orchestrator |
2025-05-28 18:58:18.461288 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-05-28 18:58:18.461876 | orchestrator | Wednesday 28 May 2025 18:58:18 +0000 (0:00:00.334) 0:01:00.399 *********
2025-05-28 18:58:20.005161 | orchestrator | ok: [testbed-manager]
2025-05-28 18:58:20.006780 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:58:20.007277 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:58:20.008435 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:58:20.011755 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:58:20.013609 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:58:20.014367 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:58:20.015226 | orchestrator |
2025-05-28 18:58:20.015709 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-05-28 18:58:20.016647 | orchestrator | Wednesday 28 May 2025 18:58:19 +0000 (0:00:01.544) 0:01:01.944 *********
2025-05-28 18:58:20.559305 | orchestrator | changed: [testbed-manager]
2025-05-28 18:58:20.559650 | orchestrator | changed: [testbed-node-3]
2025-05-28 18:58:20.560532 | orchestrator | changed: [testbed-node-5]
2025-05-28 18:58:20.562528 | orchestrator | changed: [testbed-node-4]
2025-05-28 18:58:20.563333 | orchestrator | changed: [testbed-node-1]
2025-05-28 18:58:20.563680 | orchestrator | changed: [testbed-node-0]
2025-05-28 18:58:20.564375 | orchestrator | changed: [testbed-node-2]
2025-05-28 18:58:20.567127 | orchestrator |
2025-05-28 18:58:20.567607 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-05-28 18:58:20.568220 | orchestrator | Wednesday 28 May 2025 18:58:20 +0000 (0:00:00.555) 0:01:02.500 *********
2025-05-28 18:58:20.661498 | orchestrator | ok: [testbed-manager]
2025-05-28 18:58:20.698153 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:58:20.720364 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:58:20.746002 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:58:20.824583 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:58:20.825521 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:58:20.829197 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:58:20.829276 | orchestrator |
2025-05-28 18:58:20.829293 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-05-28 18:58:20.830075 | orchestrator | Wednesday 28 May 2025 18:58:20 +0000 (0:00:00.264) 0:01:02.765 *********
2025-05-28 18:58:21.986579 | orchestrator | ok: [testbed-manager]
2025-05-28 18:58:21.986689 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:58:21.987461 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:58:21.987534 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:58:21.988234 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:58:21.988933 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:58:21.989394 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:58:21.990261 | orchestrator |
2025-05-28 18:58:21.990545 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-05-28 18:58:21.991227 | orchestrator | Wednesday 28 May 2025 18:58:21 +0000 (0:00:01.160) 0:01:03.925 *********
2025-05-28 18:58:23.720359 | orchestrator | changed: [testbed-node-5]
2025-05-28 18:58:23.720875 | orchestrator | changed: [testbed-manager]
2025-05-28 18:58:23.723792 | orchestrator | changed: [testbed-node-0]
2025-05-28 18:58:23.726362 | orchestrator | changed: [testbed-node-4]
2025-05-28 18:58:23.727268 | orchestrator | changed: [testbed-node-1]
2025-05-28 18:58:23.727301 | orchestrator | changed: [testbed-node-2]
2025-05-28 18:58:23.727768 | orchestrator | changed: [testbed-node-3]
2025-05-28 18:58:23.728396 | orchestrator |
2025-05-28 18:58:23.729376 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-05-28 18:58:23.729616 | orchestrator | Wednesday 28 May 2025 18:58:23 +0000 (0:00:01.735) 0:01:05.660 *********
2025-05-28 18:58:25.813331 | orchestrator | ok: [testbed-manager]
2025-05-28 18:58:25.813540 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:58:25.813560 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:58:25.813572 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:58:25.813583 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:58:25.813594 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:58:25.814255 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:58:25.815971 | orchestrator |
2025-05-28 18:58:25.816232 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-05-28 18:58:25.816627 | orchestrator | Wednesday 28 May 2025 18:58:25 +0000 (0:00:02.091) 0:01:07.751 *********
2025-05-28 18:59:03.362820 | orchestrator | ok: [testbed-manager]
2025-05-28 18:59:03.362934 | orchestrator | ok: [testbed-node-1]
2025-05-28 18:59:03.363254 | orchestrator | ok: [testbed-node-5]
2025-05-28 18:59:03.366247 | orchestrator | ok: [testbed-node-4]
2025-05-28 18:59:03.366966 | orchestrator | ok: [testbed-node-0]
2025-05-28 18:59:03.367372 | orchestrator | ok: [testbed-node-2]
2025-05-28 18:59:03.367740 | orchestrator | ok: [testbed-node-3]
2025-05-28 18:59:03.368071 | orchestrator |
2025-05-28 18:59:03.368476 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-05-28 18:59:03.368649 | orchestrator | Wednesday 28 May 2025 18:59:03 +0000 (0:00:37.548) 0:01:45.300 *********
2025-05-28 19:00:25.895919 | orchestrator | changed: [testbed-manager]
2025-05-28 19:00:25.896106 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:00:25.896124 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:00:25.896136 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:00:25.896147 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:00:25.896158 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:00:25.896512 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:00:25.896685 | orchestrator |
2025-05-28 19:00:25.897658 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-05-28 19:00:25.897771 | orchestrator | Wednesday 28 May 2025 19:00:25 +0000 (0:01:22.530) 0:03:07.831 *********
2025-05-28 19:00:27.451055 | orchestrator | ok: [testbed-manager]
2025-05-28 19:00:27.451220 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:00:27.453873 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:00:27.454754 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:00:27.455575 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:00:27.456431 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:00:27.457136 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:00:27.457884 | orchestrator |
2025-05-28 19:00:27.458407 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-05-28 19:00:27.459062 | orchestrator | Wednesday 28 May 2025 19:00:27 +0000 (0:00:01.559) 0:03:09.390 *********
2025-05-28 19:00:39.944600 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:00:39.944720 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:00:39.944736 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:00:39.944791 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:00:39.944873 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:00:39.946221 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:00:39.946780 | orchestrator | changed: [testbed-manager]
2025-05-28 19:00:39.947344 | orchestrator |
2025-05-28 19:00:39.948084 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-05-28 19:00:39.948423 | orchestrator | Wednesday 28 May 2025 19:00:39 +0000 (0:00:12.491) 0:03:21.882 *********
2025-05-28 19:00:40.262425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-28 19:00:40.262600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-28 19:00:40.262764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-28 19:00:40.263957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-28 19:00:40.264695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-05-28 19:00:40.265696 | orchestrator |
2025-05-28 19:00:40.266729 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-05-28 19:00:40.267302 | orchestrator | Wednesday 28 May 2025 19:00:40 +0000 (0:00:00.318) 0:03:22.201 *********
2025-05-28 19:00:40.307393 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 19:00:40.348671 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:00:40.349138 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 19:00:40.350134 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 19:00:40.367590 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:00:40.392499 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:00:40.392571 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 19:00:40.409066 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:00:40.917340 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 19:00:40.917450 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 19:00:40.919304 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-28 19:00:40.920278 | orchestrator |
2025-05-28 19:00:40.921219 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-05-28 19:00:40.922271 | orchestrator | Wednesday 28 May 2025 19:00:40 +0000 (0:00:00.658) 0:03:22.859 *********
2025-05-28 19:00:40.988504 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 19:00:40.988621 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 19:00:40.988632 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 19:00:40.988690 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 19:00:40.988780 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 19:00:40.989093 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 19:00:40.989291 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 19:00:40.989597 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 19:00:40.989820 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 19:00:40.990069 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 19:00:40.990311 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 19:00:40.990717 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 19:00:40.990803 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 19:00:40.991163 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 19:00:40.991343 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 19:00:40.991568 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 19:00:41.019896 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 19:00:41.019940 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 19:00:41.020227 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 19:00:41.020506 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 19:00:41.020704 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 19:00:41.020960 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 19:00:41.021349 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 19:00:41.021597 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 19:00:41.021848 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 19:00:41.022144 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 19:00:41.022449 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 19:00:41.022602 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 19:00:41.022863 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 19:00:41.023134 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 19:00:41.023393 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 19:00:41.023644 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 19:00:41.045910 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:00:41.045948 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 19:00:41.046080 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 19:00:41.046918 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 19:00:41.048765 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 19:00:41.048808 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 19:00:41.049522 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 19:00:41.050412 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 19:00:41.067488 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 19:00:41.068627 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:00:45.588780 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:00:45.590132 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:00:45.591248 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 19:00:45.591976 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 19:00:45.593229 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-28 19:00:45.595159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 19:00:45.596327 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 19:00:45.597205 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-28 19:00:45.598126 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 19:00:45.598974 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 19:00:45.600133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-28 19:00:45.601093 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 19:00:45.602072 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 19:00:45.602099 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 19:00:45.602516 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 19:00:45.603171 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 19:00:45.603268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 19:00:45.603598 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-28 19:00:45.603923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 19:00:45.604341 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 19:00:45.604789 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-28 19:00:45.604810 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 19:00:45.605112 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 19:00:45.605134 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-28 19:00:45.605589 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 19:00:45.605803 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-28 19:00:45.606661 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 19:00:45.607519 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-28 19:00:45.607882 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 19:00:45.608469 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-28 19:00:45.608766 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 19:00:45.609311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-28 19:00:45.609805 | orchestrator |
2025-05-28 19:00:45.610182 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-05-28 19:00:45.610432 | orchestrator | Wednesday 28 May 2025 19:00:45 +0000 (0:00:04.668) 0:03:27.528 *********
2025-05-28 19:00:47.040248 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 19:00:47.040426 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 19:00:47.041467 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 19:00:47.042313 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 19:00:47.043868 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 19:00:47.046065 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 19:00:47.046672 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-28 19:00:47.047324 | orchestrator |
2025-05-28 19:00:47.048244 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-05-28 19:00:47.048941 | orchestrator | Wednesday 28 May 2025 19:00:47 +0000 (0:00:01.451) 0:03:28.980 *********
2025-05-28 19:00:47.098581 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-28 19:00:47.124761 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:00:47.215716 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-28 19:00:47.216146 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-28 19:00:47.541632 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:00:47.541747 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:00:47.542163 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-28 19:00:47.543434 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:00:47.544753 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-28 19:00:47.545726 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-28 19:00:47.546130 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-28 19:00:47.547443 | orchestrator |
2025-05-28 19:00:47.547687 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-05-28 19:00:47.548496 | orchestrator | Wednesday 28 May 2025 19:00:47 +0000 (0:00:00.502) 0:03:29.482 *********
2025-05-28 19:00:47.600735 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-28 19:00:47.629759 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:00:47.709518 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-28 19:00:48.098610 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:00:48.098862 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-28 19:00:48.099982 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:00:48.100436 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-28 19:00:48.101397 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:00:48.101758 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-28 19:00:48.102353 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-28 19:00:48.102795 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-28 19:00:48.103246 | orchestrator |
2025-05-28 19:00:48.104000 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-05-28 19:00:48.109981 | orchestrator | Wednesday 28 May 2025 19:00:48 +0000 (0:00:00.557) 0:03:30.040 *********
2025-05-28 19:00:48.167908 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:00:48.193579 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:00:48.216477 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:00:48.274679 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:00:48.400141 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:00:48.400246 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:00:48.400351 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:00:48.401144 | orchestrator |
2025-05-28 19:00:48.402841 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-05-28 19:00:48.405165 | orchestrator | Wednesday 28 May 2025 19:00:48 +0000 (0:00:00.296) 0:03:30.336 *********
2025-05-28 19:00:54.166389 | orchestrator | ok: [testbed-manager]
2025-05-28 19:00:54.167133 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:00:54.169023 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:00:54.171363 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:00:54.171923 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:00:54.172900 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:00:54.173896 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:00:54.174995 | orchestrator |
2025-05-28 19:00:54.175844 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-05-28 19:00:54.175911 | orchestrator | Wednesday 28 May 2025 19:00:54 +0000 (0:00:05.769) 0:03:36.106 *********
2025-05-28 19:00:54.254541 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-05-28 19:00:54.255238 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-05-28 19:00:54.297712 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:00:54.298597 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-05-28 19:00:54.342367 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:00:54.342507 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-05-28 19:00:54.386347 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:00:54.386430 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
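[Annotation] The osism.commons.sysctl include above lists the per-group sysctl profiles (elasticsearch, rabbitmq, generic, compute, k3s_node) that the tasks then apply, skipping hosts outside each group. A minimal sketch of how those entries map to the sysctl.d-style "key = value" form; the profile data below is copied from the log, while the rendering helper is illustrative and not the role's actual implementation:

```python
# Sysctl profiles as shown in the "Include sysctl tasks" output above
# (rabbitmq and elasticsearch omitted here for brevity).
PROFILES = {
    "generic": [{"name": "vm.swappiness", "value": 1}],
    "compute": [{"name": "net.netfilter.nf_conntrack_max", "value": 1048576}],
    "k3s_node": [{"name": "fs.inotify.max_user_instances", "value": 1024}],
}

def render_sysctl(entries):
    # One "key = value" line per parameter, as in an /etc/sysctl.d/ file.
    return "\n".join(f"{e['name']} = {e['value']}" for e in entries)

print(render_sysctl(PROFILES["generic"]))  # → vm.swappiness = 1
```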
2025-05-28 19:00:54.422571 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:00:54.509937 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:00:54.512415 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-05-28 19:00:54.513369 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:00:54.514308 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-05-28 19:00:54.514442 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:00:54.514916 | orchestrator |
2025-05-28 19:00:54.515244 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-05-28 19:00:54.515674 | orchestrator | Wednesday 28 May 2025 19:00:54 +0000 (0:00:00.341) 0:03:36.448 *********
2025-05-28 19:00:55.548613 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-05-28 19:00:55.552121 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-05-28 19:00:55.552172 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-05-28 19:00:55.552186 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-05-28 19:00:55.552197 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-05-28 19:00:55.552255 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-05-28 19:00:55.552758 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-05-28 19:00:55.553263 | orchestrator |
2025-05-28 19:00:55.553783 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-05-28 19:00:55.554422 | orchestrator | Wednesday 28 May 2025 19:00:55 +0000 (0:00:01.040) 0:03:37.488 *********
2025-05-28 19:00:55.976313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:00:55.976445 | orchestrator |
2025-05-28 19:00:55.976861 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-05-28 19:00:55.977478 | orchestrator | Wednesday 28 May 2025 19:00:55 +0000 (0:00:00.429) 0:03:37.918 *********
2025-05-28 19:00:57.311723 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:00:57.313079 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:00:57.313095 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:00:57.313700 | orchestrator | ok: [testbed-manager]
2025-05-28 19:00:57.314693 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:00:57.316360 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:00:57.317093 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:00:57.317614 | orchestrator |
2025-05-28 19:00:57.318935 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-05-28 19:00:57.318970 | orchestrator | Wednesday 28 May 2025 19:00:57 +0000 (0:00:01.333) 0:03:39.251 *********
2025-05-28 19:00:57.892896 | orchestrator | ok: [testbed-manager]
2025-05-28 19:00:57.893045 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:00:57.893886 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:00:57.894567 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:00:57.895754 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:00:57.896255 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:00:57.897379 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:00:57.897829 | orchestrator |
2025-05-28 19:00:57.898967 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-05-28 19:00:57.899598 | orchestrator | Wednesday 28 May 2025 19:00:57 +0000 (0:00:00.581) 0:03:39.833 *********
2025-05-28 19:00:58.529495 | orchestrator | changed: [testbed-manager]
2025-05-28 19:00:58.529724 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:00:58.532207 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:00:58.533177 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:00:58.533525 | orchestrator | changed: [testbed-node-0]
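[Annotation] The motd tasks above first check for /etc/default/motd-news and then disable the dynamic motd-news service; on Debian-family systems that file's ENABLED key controls whether the news fetcher runs. A minimal sketch of the edit (hypothetical helper, not the osism.commons.motd implementation) operating on the file's text:

```python
def disable_motd_news(config: str) -> str:
    """Force ENABLED=0 in /etc/default/motd-news-style content."""
    out = []
    for line in config.splitlines():
        if line.strip().startswith("ENABLED="):
            out.append("ENABLED=0")  # disable the dynamic news fetcher
        else:
            out.append(line)
    return "\n".join(out)

print(disable_motd_news("# dynamic MOTD news\nENABLED=1"))
```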
2025-05-28 19:00:58.534502 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:00:58.535069 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:00:58.535801 | orchestrator |
2025-05-28 19:00:58.536104 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-05-28 19:00:58.537040 | orchestrator | Wednesday 28 May 2025 19:00:58 +0000 (0:00:00.634) 0:03:40.468 *********
2025-05-28 19:00:59.122201 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:00:59.122335 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:00:59.123210 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:00:59.123526 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:00:59.124618 | orchestrator | ok: [testbed-manager]
2025-05-28 19:00:59.128140 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:00:59.128964 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:00:59.129463 | orchestrator |
2025-05-28 19:00:59.130630 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-05-28 19:00:59.131442 | orchestrator | Wednesday 28 May 2025 19:00:59 +0000 (0:00:00.596) 0:03:41.064 *********
2025-05-28 19:01:00.082407 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748456974.8025868, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:01:00.082525 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748457008.390243, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:01:00.082712 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748457007.5314915, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:01:00.083672 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748457008.3608148, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:01:00.084187 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748457010.4292796, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:01:00.085779 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748457008.9230201, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:01:00.087032 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748457010.6969874, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:01:00.088126 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748456997.466004, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:01:00.088908 | orchestrator | changed: [testbed-node-5] =>
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748456925.7998173, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:01:00.089482 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748456926.7094672, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:01:00.090400 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748456927.3071132, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:01:00.091143 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1748456925.1369922, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:01:00.091762 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748456928.2894151, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:01:00.092792 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748456929.2383904, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:01:00.093769 | orchestrator | 2025-05-28 19:01:00.094246 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-28 19:01:00.097036 | orchestrator | Wednesday 28 May 2025 19:01:00 +0000 (0:00:00.957) 0:03:42.022 ********* 2025-05-28 19:01:01.221680 | orchestrator | changed: [testbed-manager] 2025-05-28 19:01:01.224173 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:01:01.224221 | orchestrator | changed: [testbed-node-5] 2025-05-28 
19:01:01.224892 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:01:01.225701 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:01:01.226240 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:01:01.227623 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:01:01.228616 | orchestrator | 2025-05-28 19:01:01.229050 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-28 19:01:01.231194 | orchestrator | Wednesday 28 May 2025 19:01:01 +0000 (0:00:01.138) 0:03:43.160 ********* 2025-05-28 19:01:02.370544 | orchestrator | changed: [testbed-manager] 2025-05-28 19:01:02.371350 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:01:02.372646 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:01:02.373942 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:01:02.374770 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:01:02.377053 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:01:02.377702 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:01:02.378277 | orchestrator | 2025-05-28 19:01:02.379350 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-05-28 19:01:02.379540 | orchestrator | Wednesday 28 May 2025 19:01:02 +0000 (0:00:01.150) 0:03:44.311 ********* 2025-05-28 19:01:02.457335 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:01:02.487571 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:01:02.536544 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:01:02.576281 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:01:02.690525 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:01:02.690628 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:01:02.691258 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:01:02.691935 | orchestrator | 2025-05-28 19:01:02.692632 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 
2025-05-28 19:01:02.692957 | orchestrator | Wednesday 28 May 2025 19:01:02 +0000 (0:00:00.320) 0:03:44.631 ********* 2025-05-28 19:01:03.469137 | orchestrator | ok: [testbed-manager] 2025-05-28 19:01:03.471367 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:01:03.472025 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:01:03.473256 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:01:03.474189 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:01:03.474775 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:01:03.476603 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:01:03.476626 | orchestrator | 2025-05-28 19:01:03.476640 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-28 19:01:03.477159 | orchestrator | Wednesday 28 May 2025 19:01:03 +0000 (0:00:00.775) 0:03:45.407 ********* 2025-05-28 19:01:03.883932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:01:03.884547 | orchestrator | 2025-05-28 19:01:03.885987 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-28 19:01:03.887127 | orchestrator | Wednesday 28 May 2025 19:01:03 +0000 (0:00:00.415) 0:03:45.822 ********* 2025-05-28 19:01:11.399503 | orchestrator | ok: [testbed-manager] 2025-05-28 19:01:11.399624 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:01:11.400848 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:01:11.401576 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:01:11.403095 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:01:11.404312 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:01:11.404962 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:01:11.405490 | orchestrator | 2025-05-28 19:01:11.406398 | orchestrator | 
TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-28 19:01:11.406712 | orchestrator | Wednesday 28 May 2025 19:01:11 +0000 (0:00:07.516) 0:03:53.339 ********* 2025-05-28 19:01:12.553949 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:01:12.556398 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:01:12.556451 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:01:12.557402 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:01:12.557528 | orchestrator | ok: [testbed-manager] 2025-05-28 19:01:12.557603 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:01:12.558429 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:01:12.558964 | orchestrator | 2025-05-28 19:01:12.559451 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-28 19:01:12.560278 | orchestrator | Wednesday 28 May 2025 19:01:12 +0000 (0:00:01.153) 0:03:54.493 ********* 2025-05-28 19:01:13.580575 | orchestrator | ok: [testbed-manager] 2025-05-28 19:01:13.581035 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:01:13.581778 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:01:13.585739 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:01:13.585862 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:01:13.585887 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:01:13.585906 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:01:13.585925 | orchestrator | 2025-05-28 19:01:13.586071 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-28 19:01:13.586524 | orchestrator | Wednesday 28 May 2025 19:01:13 +0000 (0:00:01.026) 0:03:55.520 ********* 2025-05-28 19:01:14.052298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:01:14.052659 | 
orchestrator | 2025-05-28 19:01:14.052987 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-28 19:01:14.053776 | orchestrator | Wednesday 28 May 2025 19:01:14 +0000 (0:00:00.472) 0:03:55.992 ********* 2025-05-28 19:01:22.746609 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:01:22.747691 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:01:22.748659 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:01:22.751491 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:01:22.752168 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:01:22.752739 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:01:22.753815 | orchestrator | changed: [testbed-manager] 2025-05-28 19:01:22.754582 | orchestrator | 2025-05-28 19:01:22.755452 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-28 19:01:22.756462 | orchestrator | Wednesday 28 May 2025 19:01:22 +0000 (0:00:08.695) 0:04:04.687 ********* 2025-05-28 19:01:23.526757 | orchestrator | changed: [testbed-manager] 2025-05-28 19:01:23.527298 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:01:23.528533 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:01:23.530189 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:01:23.530596 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:01:23.531507 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:01:23.532687 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:01:23.533396 | orchestrator | 2025-05-28 19:01:23.534106 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-28 19:01:23.535500 | orchestrator | Wednesday 28 May 2025 19:01:23 +0000 (0:00:00.779) 0:04:05.467 ********* 2025-05-28 19:01:24.760052 | orchestrator | changed: [testbed-manager] 2025-05-28 19:01:24.761261 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:01:24.762565 | orchestrator | 
changed: [testbed-node-5] 2025-05-28 19:01:24.763374 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:01:24.765072 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:01:24.765224 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:01:24.766704 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:01:24.766719 | orchestrator | 2025-05-28 19:01:24.766727 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-28 19:01:24.766735 | orchestrator | Wednesday 28 May 2025 19:01:24 +0000 (0:00:01.234) 0:04:06.701 ********* 2025-05-28 19:01:25.931812 | orchestrator | changed: [testbed-manager] 2025-05-28 19:01:25.931925 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:01:25.934298 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:01:25.934320 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:01:25.934982 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:01:25.937410 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:01:25.938176 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:01:25.941793 | orchestrator | 2025-05-28 19:01:25.945408 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-28 19:01:25.946271 | orchestrator | Wednesday 28 May 2025 19:01:25 +0000 (0:00:01.170) 0:04:07.871 ********* 2025-05-28 19:01:26.045864 | orchestrator | ok: [testbed-manager] 2025-05-28 19:01:26.083386 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:01:26.117749 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:01:26.152761 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:01:26.233606 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:01:26.233898 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:01:26.234982 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:01:26.235663 | orchestrator | 2025-05-28 19:01:26.237295 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default 
value] *** 2025-05-28 19:01:26.237340 | orchestrator | Wednesday 28 May 2025 19:01:26 +0000 (0:00:00.304) 0:04:08.176 ********* 2025-05-28 19:01:26.373092 | orchestrator | ok: [testbed-manager] 2025-05-28 19:01:26.416554 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:01:26.457318 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:01:26.489551 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:01:26.575123 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:01:26.575692 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:01:26.576883 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:01:26.577933 | orchestrator | 2025-05-28 19:01:26.578811 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-28 19:01:26.579876 | orchestrator | Wednesday 28 May 2025 19:01:26 +0000 (0:00:00.340) 0:04:08.516 ********* 2025-05-28 19:01:26.674469 | orchestrator | ok: [testbed-manager] 2025-05-28 19:01:26.714511 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:01:26.747209 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:01:26.783608 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:01:26.872531 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:01:26.873939 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:01:26.875230 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:01:26.876788 | orchestrator | 2025-05-28 19:01:26.877727 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-28 19:01:26.879876 | orchestrator | Wednesday 28 May 2025 19:01:26 +0000 (0:00:00.296) 0:04:08.813 ********* 2025-05-28 19:01:32.583459 | orchestrator | ok: [testbed-manager] 2025-05-28 19:01:32.583589 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:01:32.583693 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:01:32.584186 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:01:32.584884 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:01:32.584929 | orchestrator | ok: 
[testbed-node-2] 2025-05-28 19:01:32.585697 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:01:32.586120 | orchestrator | 2025-05-28 19:01:32.586318 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-28 19:01:32.586736 | orchestrator | Wednesday 28 May 2025 19:01:32 +0000 (0:00:05.710) 0:04:14.524 ********* 2025-05-28 19:01:33.080564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:01:33.080783 | orchestrator | 2025-05-28 19:01:33.083714 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-28 19:01:33.083881 | orchestrator | Wednesday 28 May 2025 19:01:33 +0000 (0:00:00.496) 0:04:15.020 ********* 2025-05-28 19:01:33.155858 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-28 19:01:33.155957 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-28 19:01:33.202830 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:01:33.203124 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-28 19:01:33.203707 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-28 19:01:33.204324 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-28 19:01:33.205301 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-28 19:01:33.260708 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:01:33.261217 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-28 19:01:33.262110 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-28 19:01:33.295280 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:01:33.343599 | orchestrator | skipping: [testbed-node-5] 2025-05-28 
19:01:33.343694 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-28 19:01:33.343709 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-28 19:01:33.415324 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:01:33.415517 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-28 19:01:33.416191 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-28 19:01:33.416924 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:01:33.417086 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-28 19:01:33.419044 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-28 19:01:33.419949 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:01:33.420686 | orchestrator | 2025-05-28 19:01:33.421851 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-28 19:01:33.422184 | orchestrator | Wednesday 28 May 2025 19:01:33 +0000 (0:00:00.336) 0:04:15.357 ********* 2025-05-28 19:01:33.866610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:01:33.866785 | orchestrator | 2025-05-28 19:01:33.867267 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-28 19:01:33.867892 | orchestrator | Wednesday 28 May 2025 19:01:33 +0000 (0:00:00.451) 0:04:15.808 ********* 2025-05-28 19:01:33.959370 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-28 19:01:33.960449 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-28 19:01:33.993268 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:01:34.036966 | orchestrator | skipping: [testbed-node-4] => 
(item=ModemManager.service)  2025-05-28 19:01:34.037241 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:01:34.038112 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-28 19:01:34.073865 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:01:34.074166 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-28 19:01:34.140167 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:01:34.140269 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-28 19:01:34.218774 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:01:34.219225 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:01:34.219946 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-28 19:01:34.221575 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:01:34.221608 | orchestrator | 2025-05-28 19:01:34.221622 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-28 19:01:34.221813 | orchestrator | Wednesday 28 May 2025 19:01:34 +0000 (0:00:00.350) 0:04:16.158 ********* 2025-05-28 19:01:34.657472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:01:34.657901 | orchestrator | 2025-05-28 19:01:34.658754 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-28 19:01:34.659010 | orchestrator | Wednesday 28 May 2025 19:01:34 +0000 (0:00:00.440) 0:04:16.599 ********* 2025-05-28 19:02:07.149384 | orchestrator | changed: [testbed-manager] 2025-05-28 19:02:07.150770 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:02:07.150793 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:02:07.152845 | orchestrator | changed: 
[testbed-node-1] 2025-05-28 19:02:07.152857 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:02:07.153101 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:02:07.154629 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:02:07.155719 | orchestrator | 2025-05-28 19:02:07.155731 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-28 19:02:07.155737 | orchestrator | Wednesday 28 May 2025 19:02:07 +0000 (0:00:32.490) 0:04:49.090 ********* 2025-05-28 19:02:14.007197 | orchestrator | changed: [testbed-manager] 2025-05-28 19:02:14.008393 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:02:14.009431 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:02:14.010538 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:02:14.012722 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:02:14.013407 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:02:14.014941 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:02:14.015694 | orchestrator | 2025-05-28 19:02:14.016602 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-28 19:02:14.017214 | orchestrator | Wednesday 28 May 2025 19:02:13 +0000 (0:00:06.857) 0:04:55.947 ********* 2025-05-28 19:02:20.572301 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:02:20.572734 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:02:20.573639 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:02:20.575159 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:02:20.575410 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:02:20.576516 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:02:20.576718 | orchestrator | changed: [testbed-manager] 2025-05-28 19:02:20.577807 | orchestrator | 2025-05-28 19:02:20.578366 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-28 19:02:20.579019 | orchestrator | 
Wednesday 28 May 2025 19:02:20 +0000 (0:00:06.563) 0:05:02.511 ********* 2025-05-28 19:02:21.959932 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:02:21.960093 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:02:21.960938 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:02:21.962270 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:02:21.963381 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:02:21.964432 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:02:21.964581 | orchestrator | ok: [testbed-manager] 2025-05-28 19:02:21.965900 | orchestrator | 2025-05-28 19:02:21.966215 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-28 19:02:21.966817 | orchestrator | Wednesday 28 May 2025 19:02:21 +0000 (0:00:01.389) 0:05:03.900 ********* 2025-05-28 19:02:27.273666 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:02:27.273781 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:02:27.274740 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:02:27.275109 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:02:27.275457 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:02:27.275917 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:02:27.277709 | orchestrator | changed: [testbed-manager] 2025-05-28 19:02:27.278203 | orchestrator | 2025-05-28 19:02:27.278882 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-28 19:02:27.279276 | orchestrator | Wednesday 28 May 2025 19:02:27 +0000 (0:00:05.308) 0:05:09.209 ********* 2025-05-28 19:02:27.726447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:02:27.726582 | orchestrator | 2025-05-28 19:02:27.726596 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init 
configuration directory] ******* 2025-05-28 19:02:27.726660 | orchestrator | Wednesday 28 May 2025 19:02:27 +0000 (0:00:00.457) 0:05:09.667 ********* 2025-05-28 19:02:28.439882 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:02:28.440055 | orchestrator | changed: [testbed-manager] 2025-05-28 19:02:28.440073 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:02:28.440085 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:02:28.440096 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:02:28.440107 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:02:28.441024 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:02:28.442264 | orchestrator | 2025-05-28 19:02:28.442298 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-28 19:02:28.442560 | orchestrator | Wednesday 28 May 2025 19:02:28 +0000 (0:00:00.707) 0:05:10.374 ********* 2025-05-28 19:02:29.761949 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:02:29.762211 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:02:29.762566 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:02:29.765109 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:02:29.765144 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:02:29.765156 | orchestrator | ok: [testbed-manager] 2025-05-28 19:02:29.766181 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:02:29.766251 | orchestrator | 2025-05-28 19:02:29.766654 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-28 19:02:29.767535 | orchestrator | Wednesday 28 May 2025 19:02:29 +0000 (0:00:01.325) 0:05:11.700 ********* 2025-05-28 19:02:30.539075 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:02:30.539200 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:02:30.539215 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:02:30.539295 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:02:30.539732 | orchestrator | 
changed: [testbed-node-3]
2025-05-28 19:02:30.541769 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:02:30.541829 | orchestrator | changed: [testbed-manager]
2025-05-28 19:02:30.541873 | orchestrator | 
2025-05-28 19:02:30.541928 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-05-28 19:02:30.542469 | orchestrator | Wednesday 28 May 2025 19:02:30 +0000 (0:00:00.778) 0:05:12.479 *********
2025-05-28 19:02:30.608600 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:02:30.657802 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:02:30.693434 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:02:30.723124 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:02:30.751284 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:02:30.809201 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:02:30.811205 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:02:30.812733 | orchestrator | 
2025-05-28 19:02:30.812777 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-05-28 19:02:30.812791 | orchestrator | Wednesday 28 May 2025 19:02:30 +0000 (0:00:00.272) 0:05:12.752 *********
2025-05-28 19:02:30.864769 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:02:30.890464 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:02:30.919563 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:02:30.946486 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:02:30.980157 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:02:31.152180 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:02:31.152292 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:02:31.156031 | orchestrator | 
2025-05-28 19:02:31.156073 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-05-28 19:02:31.156086 | orchestrator | Wednesday 28 May 2025 19:02:31 +0000 (0:00:00.342) 0:05:13.094 *********
2025-05-28 19:02:31.238701 | orchestrator | ok: [testbed-manager]
2025-05-28 19:02:31.267788 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:02:31.295860 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:02:31.342790 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:02:31.408635 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:02:31.408776 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:02:31.413817 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:02:31.414634 | orchestrator | 
2025-05-28 19:02:31.415308 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-05-28 19:02:31.415958 | orchestrator | Wednesday 28 May 2025 19:02:31 +0000 (0:00:00.255) 0:05:13.349 *********
2025-05-28 19:02:31.493020 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:02:31.519412 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:02:31.543559 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:02:31.573955 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:02:31.639204 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:02:31.639402 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:02:31.639671 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:02:31.640095 | orchestrator | 
2025-05-28 19:02:31.640597 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-05-28 19:02:31.641069 | orchestrator | Wednesday 28 May 2025 19:02:31 +0000 (0:00:00.232) 0:05:13.582 *********
2025-05-28 19:02:31.706668 | orchestrator | ok: [testbed-manager]
2025-05-28 19:02:31.785464 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:02:31.813336 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:02:31.843847 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:02:31.906179 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:02:31.907420 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:02:31.908555 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:02:31.909844 | orchestrator | 
2025-05-28 19:02:31.910249 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-05-28 19:02:31.911324 | orchestrator | Wednesday 28 May 2025 19:02:31 +0000 (0:00:00.265) 0:05:13.848 *********
2025-05-28 19:02:31.987689 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:02:32.014258 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:02:32.040492 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:02:32.079571 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:02:32.149114 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:02:32.149198 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:02:32.149560 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:02:32.150942 | orchestrator | 
2025-05-28 19:02:32.152330 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-05-28 19:02:32.152446 | orchestrator | Wednesday 28 May 2025 19:02:32 +0000 (0:00:00.242) 0:05:14.090 *********
2025-05-28 19:02:32.250315 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:02:32.282378 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:02:32.321898 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:02:32.351762 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:02:32.422072 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:02:32.423513 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:02:32.425989 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:02:32.427219 | orchestrator | 
2025-05-28 19:02:32.428308 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-05-28 19:02:32.429277 | orchestrator | Wednesday 28 May 2025 19:02:32 +0000 (0:00:00.272) 0:05:14.362 *********
2025-05-28 19:02:32.854420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:02:32.857627 | orchestrator | 
2025-05-28 19:02:32.857663 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-05-28 19:02:32.857677 | orchestrator | Wednesday 28 May 2025 19:02:32 +0000 (0:00:00.433) 0:05:14.796 *********
2025-05-28 19:02:33.562599 | orchestrator | ok: [testbed-manager]
2025-05-28 19:02:33.562706 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:02:33.563154 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:02:33.564513 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:02:33.564997 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:02:33.565750 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:02:33.567329 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:02:33.568182 | orchestrator | 
2025-05-28 19:02:33.568604 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-05-28 19:02:33.569634 | orchestrator | Wednesday 28 May 2025 19:02:33 +0000 (0:00:00.705) 0:05:15.501 *********
2025-05-28 19:02:35.978826 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:02:35.979788 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:02:35.980242 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:02:35.982818 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:02:35.983253 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:02:35.983665 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:02:35.984277 | orchestrator | ok: [testbed-manager]
2025-05-28 19:02:35.985208 | orchestrator | 
2025-05-28 19:02:35.986631 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-05-28 19:02:35.987148 | orchestrator | Wednesday 28 May 2025 19:02:35 +0000 (0:00:02.418) 0:05:17.920 *********
2025-05-28 19:02:36.049503 | orchestrator | skipping: [testbed-manager] => (item=containerd) 
2025-05-28 19:02:36.049826 | orchestrator | skipping: [testbed-manager] => (item=docker.io) 
2025-05-28 19:02:36.120812 | orchestrator | skipping: [testbed-manager] => (item=docker-engine) 
2025-05-28 19:02:36.121109 | orchestrator | skipping: [testbed-node-3] => (item=containerd) 
2025-05-28 19:02:36.121445 | orchestrator | skipping: [testbed-node-3] => (item=docker.io) 
2025-05-28 19:02:36.122231 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine) 
2025-05-28 19:02:36.180214 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:02:36.180335 | orchestrator | skipping: [testbed-node-4] => (item=containerd) 
2025-05-28 19:02:36.180797 | orchestrator | skipping: [testbed-node-4] => (item=docker.io) 
2025-05-28 19:02:36.281217 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:02:36.282454 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine) 
2025-05-28 19:02:36.282990 | orchestrator | skipping: [testbed-node-5] => (item=containerd) 
2025-05-28 19:02:36.285579 | orchestrator | skipping: [testbed-node-5] => (item=docker.io) 
2025-05-28 19:02:36.285605 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine) 
2025-05-28 19:02:36.357506 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:02:36.357652 | orchestrator | skipping: [testbed-node-0] => (item=containerd) 
2025-05-28 19:02:36.358780 | orchestrator | skipping: [testbed-node-0] => (item=docker.io) 
2025-05-28 19:02:36.360013 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine) 
2025-05-28 19:02:36.429466 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:02:36.429871 | orchestrator | skipping: [testbed-node-1] => (item=containerd) 
2025-05-28 19:02:36.431618 | orchestrator | skipping: [testbed-node-1] => (item=docker.io) 
2025-05-28 19:02:36.432158 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine) 
2025-05-28 19:02:36.565755 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:02:36.566244 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:02:36.568130 | orchestrator | skipping: [testbed-node-2] => (item=containerd) 
2025-05-28 19:02:36.568930 | orchestrator | skipping: [testbed-node-2] => (item=docker.io) 
2025-05-28 19:02:36.570152 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine) 
2025-05-28 19:02:36.571386 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:02:36.572432 | orchestrator | 
2025-05-28 19:02:36.573143 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-05-28 19:02:36.574122 | orchestrator | Wednesday 28 May 2025 19:02:36 +0000 (0:00:00.586) 0:05:18.506 *********
2025-05-28 19:02:41.606305 | orchestrator | ok: [testbed-manager]
2025-05-28 19:02:41.606446 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:02:41.607548 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:02:41.607625 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:02:41.609773 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:02:41.609793 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:02:41.609805 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:02:41.611406 | orchestrator | 
2025-05-28 19:02:41.611943 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-05-28 19:02:41.612431 | orchestrator | Wednesday 28 May 2025 19:02:41 +0000 (0:00:05.038) 0:05:23.545 *********
2025-05-28 19:02:42.611058 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:02:42.611644 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:02:42.611701 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:02:42.611724 | orchestrator | ok: [testbed-manager]
2025-05-28 19:02:42.611811 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:02:42.612717 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:02:42.613367 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:02:42.614179 | orchestrator | 
2025-05-28 19:02:42.616259 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-05-28 19:02:42.616659 | orchestrator | Wednesday 28 May 2025 19:02:42 +0000 (0:00:01.005) 0:05:24.550 *********
2025-05-28 19:02:49.273084 | orchestrator | ok: [testbed-manager]
2025-05-28 19:02:49.274119 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:02:49.274302 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:02:49.277274 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:02:49.278065 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:02:49.282068 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:02:49.282108 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:02:49.282121 | orchestrator | 
2025-05-28 19:02:49.282335 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-05-28 19:02:49.283339 | orchestrator | Wednesday 28 May 2025 19:02:49 +0000 (0:00:06.661) 0:05:31.212 *********
2025-05-28 19:02:52.090363 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:02:52.090552 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:02:52.091613 | orchestrator | changed: [testbed-manager]
2025-05-28 19:02:52.092888 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:02:52.093835 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:02:52.095881 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:02:52.096115 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:02:52.097274 | orchestrator | 
2025-05-28 19:02:52.099254 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-05-28 19:02:52.099580 | orchestrator | Wednesday 28 May 2025 19:02:52 +0000 (0:00:02.818) 0:05:34.031 *********
2025-05-28 19:02:53.386312 | orchestrator | ok: [testbed-manager]
2025-05-28 19:02:53.387414 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:02:53.387900 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:02:53.388849 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:02:53.391175 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:02:53.391210 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:02:53.391221 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:02:53.391233 | orchestrator | 
2025-05-28 19:02:53.391775 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-05-28 19:02:53.392418 | orchestrator | Wednesday 28 May 2025 19:02:53 +0000 (0:00:01.293) 0:05:35.324 *********
2025-05-28 19:02:54.967443 | orchestrator | ok: [testbed-manager]
2025-05-28 19:02:54.967589 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:02:54.967617 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:02:54.967635 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:02:54.968258 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:02:54.969442 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:02:54.971773 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:02:54.972423 | orchestrator | 
2025-05-28 19:02:54.973763 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-05-28 19:02:54.974106 | orchestrator | Wednesday 28 May 2025 19:02:54 +0000 (0:00:01.580) 0:05:36.905 *********
2025-05-28 19:02:55.160913 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:02:55.223445 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:02:55.302389 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:02:55.363837 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:02:55.558539 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:02:55.558634 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:02:55.558648 | orchestrator | changed: [testbed-manager]
2025-05-28 19:02:55.561258 | orchestrator | 
2025-05-28 19:02:55.561333 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-05-28 19:02:55.561413 | orchestrator | Wednesday 28 May 2025 19:02:55 +0000 (0:00:00.594) 0:05:37.500 *********
2025-05-28 19:03:04.009332 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:04.009469 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:04.010004 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:04.010436 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:04.014785 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:04.015010 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:04.015819 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:04.017296 | orchestrator | 
2025-05-28 19:03:04.017699 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-05-28 19:03:04.018476 | orchestrator | Wednesday 28 May 2025 19:03:03 +0000 (0:00:08.445) 0:05:45.946 *********
2025-05-28 19:03:04.983931 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:04.984192 | orchestrator | changed: [testbed-manager]
2025-05-28 19:03:04.985270 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:04.986273 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:04.987565 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:04.988669 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:04.990290 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:04.990832 | orchestrator | 
2025-05-28 19:03:04.991580 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-05-28 19:03:04.992692 | orchestrator | Wednesday 28 May 2025 19:03:04 +0000 (0:00:00.977) 0:05:46.923 *********
2025-05-28 19:03:16.186333 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:16.186480 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:16.186498 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:16.186763 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:16.187815 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:16.188991 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:16.189671 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:16.189792 | orchestrator | 
2025-05-28 19:03:16.190290 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-05-28 19:03:16.191227 | orchestrator | Wednesday 28 May 2025 19:03:16 +0000 (0:00:11.198) 0:05:58.122 *********
2025-05-28 19:03:28.669502 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:28.669616 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:28.670359 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:28.670388 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:28.672698 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:28.673503 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:28.674200 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:28.674544 | orchestrator | 
2025-05-28 19:03:28.675213 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-05-28 19:03:28.675854 | orchestrator | Wednesday 28 May 2025 19:03:28 +0000 (0:00:12.483) 0:06:10.605 *********
2025-05-28 19:03:29.082692 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-05-28 19:03:29.869699 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-05-28 19:03:29.869829 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-05-28 19:03:29.869845 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-05-28 19:03:29.870875 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-05-28 19:03:29.872259 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-05-28 19:03:29.873083 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-05-28 19:03:29.874073 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-05-28 19:03:29.874576 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-05-28 19:03:29.875466 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-05-28 19:03:29.876320 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-05-28 19:03:29.877196 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-05-28 19:03:29.877668 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-05-28 19:03:29.879259 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-05-28 19:03:29.879979 | orchestrator | 
2025-05-28 19:03:29.882236 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-05-28 19:03:29.883129 | orchestrator | Wednesday 28 May 2025 19:03:29 +0000 (0:00:01.203) 0:06:11.808 *********
2025-05-28 19:03:30.008472 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:03:30.075518 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:03:30.147408 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:03:30.211508 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:03:30.291290 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:03:30.423541 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:03:30.423736 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:03:30.426369 | orchestrator | 
2025-05-28 19:03:30.427173 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-05-28 19:03:30.428023 | orchestrator | Wednesday 28 May 2025 19:03:30 +0000 (0:00:00.556) 0:06:12.364 *********
2025-05-28 19:03:33.802250 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:33.802639 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:33.803414 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:33.804758 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:33.805096 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:33.805641 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:33.806118 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:33.806640 | orchestrator | 
2025-05-28 19:03:33.807177 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-05-28 19:03:33.807709 | orchestrator | Wednesday 28 May 2025 19:03:33 +0000 (0:00:03.377) 0:06:15.742 *********
2025-05-28 19:03:33.939647 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:03:34.000827 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:03:34.071048 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:03:34.315265 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:03:34.378526 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:03:34.467776 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:03:34.468416 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:03:34.469175 | orchestrator | 
2025-05-28 19:03:34.469920 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-05-28 19:03:34.475570 | orchestrator | Wednesday 28 May 2025 19:03:34 +0000 (0:00:00.666) 0:06:16.408 *********
2025-05-28 19:03:34.545596 | orchestrator | skipping: [testbed-manager] => (item=python3-docker) 
2025-05-28 19:03:34.546097 | orchestrator | skipping: [testbed-manager] => (item=python-docker) 
2025-05-28 19:03:34.615490 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:03:34.615616 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker) 
2025-05-28 19:03:34.615642 | orchestrator | skipping: [testbed-node-3] => (item=python-docker) 
2025-05-28 19:03:34.687068 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:03:34.687227 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker) 
2025-05-28 19:03:34.687583 | orchestrator | skipping: [testbed-node-4] => (item=python-docker) 
2025-05-28 19:03:34.763332 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:03:34.764162 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker) 
2025-05-28 19:03:34.764803 | orchestrator | skipping: [testbed-node-5] => (item=python-docker) 
2025-05-28 19:03:34.833450 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:03:34.833925 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker) 
2025-05-28 19:03:34.834875 | orchestrator | skipping: [testbed-node-0] => (item=python-docker) 
2025-05-28 19:03:34.904602 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:03:34.904704 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker) 
2025-05-28 19:03:34.905138 | orchestrator | skipping: [testbed-node-1] => (item=python-docker) 
2025-05-28 19:03:35.022399 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:03:35.024172 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker) 
2025-05-28 19:03:35.024524 | orchestrator | skipping: [testbed-node-2] => (item=python-docker) 
2025-05-28 19:03:35.025343 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:03:35.026903 | orchestrator | 
2025-05-28 19:03:35.026973 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-05-28 19:03:35.027236 | orchestrator | Wednesday 28 May 2025 19:03:35 +0000 (0:00:00.556) 0:06:16.964 *********
2025-05-28 19:03:35.240483 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:03:35.359101 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:03:35.465444 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:03:35.532207 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:03:35.603326 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:03:35.700908 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:03:35.701874 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:03:35.702278 | orchestrator | 
2025-05-28 19:03:35.705909 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-05-28 19:03:35.705999 | orchestrator | Wednesday 28 May 2025 19:03:35 +0000 (0:00:00.676) 0:06:17.640 *********
2025-05-28 19:03:35.854914 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:03:35.919776 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:03:35.983660 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:03:36.054332 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:03:36.115448 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:03:36.230303 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:03:36.231092 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:03:36.231772 | orchestrator | 
2025-05-28 19:03:36.234824 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-05-28 19:03:36.234877 | orchestrator | Wednesday 28 May 2025 19:03:36 +0000 (0:00:00.544) 0:06:18.170 *********
2025-05-28 19:03:36.363508 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:03:36.437838 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:03:36.509469 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:03:36.573353 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:03:36.639120 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:03:36.774436 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:03:36.775325 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:03:36.778900 | orchestrator | 
2025-05-28 19:03:36.778985 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-05-28 19:03:36.779002 | orchestrator | Wednesday 28 May 2025 19:03:36 +0000 (0:00:00.544) 0:06:18.715 *********
2025-05-28 19:03:41.975665 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:41.975809 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:41.976761 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:41.977361 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:41.978157 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:41.982713 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:41.983297 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:41.984052 | orchestrator | 
2025-05-28 19:03:41.984295 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-05-28 19:03:41.985135 | orchestrator | Wednesday 28 May 2025 19:03:41 +0000 (0:00:05.198) 0:06:23.914 *********
2025-05-28 19:03:42.865369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:03:42.865724 | orchestrator | 
2025-05-28 19:03:42.866715 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-05-28 19:03:42.867688 | orchestrator | Wednesday 28 May 2025 19:03:42 +0000 (0:00:00.889) 0:06:24.803 *********
2025-05-28 19:03:43.312590 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:43.705484 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:43.706124 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:43.706699 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:43.707383 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:43.708203 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:43.709326 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:43.710657 | orchestrator | 
2025-05-28 19:03:43.711599 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-05-28 19:03:43.712572 | orchestrator | Wednesday 28 May 2025 19:03:43 +0000 (0:00:00.842) 0:06:25.646 *********
2025-05-28 19:03:44.118184 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:44.507422 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:44.508776 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:44.509237 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:44.509884 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:44.511243 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:44.511872 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:44.512358 | orchestrator | 
2025-05-28 19:03:44.513599 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-05-28 19:03:44.514167 | orchestrator | Wednesday 28 May 2025 19:03:44 +0000 (0:00:00.800) 0:06:26.446 *********
2025-05-28 19:03:45.980395 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:45.980589 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:45.981148 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:45.981631 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:45.982293 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:45.982744 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:45.983230 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:45.983832 | orchestrator | 
2025-05-28 19:03:45.984467 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-05-28 19:03:45.984809 | orchestrator | Wednesday 28 May 2025 19:03:45 +0000 (0:00:01.472) 0:06:27.919 *********
2025-05-28 19:03:46.111090 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:03:47.290221 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:03:47.290448 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:03:47.293839 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:03:47.294089 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:03:47.294118 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:03:47.294130 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:03:47.294517 | orchestrator | 
2025-05-28 19:03:47.295386 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-05-28 19:03:47.296027 | orchestrator | Wednesday 28 May 2025 19:03:47 +0000 (0:00:01.309) 0:06:29.229 *********
2025-05-28 19:03:48.539724 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:48.539896 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:48.540552 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:48.542130 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:48.542161 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:48.543455 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:48.543849 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:48.544411 | orchestrator | 
2025-05-28 19:03:48.544810 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-05-28 19:03:48.545740 | orchestrator | Wednesday 28 May 2025 19:03:48 +0000 (0:00:01.249) 0:06:30.478 *********
2025-05-28 19:03:49.885855 | orchestrator | changed: [testbed-manager]
2025-05-28 19:03:49.886390 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:03:49.889230 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:03:49.889804 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:03:49.892338 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:03:49.892890 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:03:49.893775 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:03:49.897069 | orchestrator | 
2025-05-28 19:03:49.897765 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-05-28 19:03:49.898110 | orchestrator | Wednesday 28 May 2025 19:03:49 +0000 (0:00:01.346) 0:06:31.825 *********
2025-05-28 19:03:50.958530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:03:50.958763 | orchestrator | 
2025-05-28 19:03:50.959437 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-05-28 19:03:50.965682 | orchestrator | Wednesday 28 May 2025 19:03:50 +0000 (0:00:01.072) 0:06:32.898 *********
2025-05-28 19:03:52.271653 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:03:52.274696 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:03:52.275550 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:52.276136 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:03:52.276992 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:03:52.277212 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:03:52.278187 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:03:52.278493 | orchestrator | 
2025-05-28 19:03:52.280064 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-05-28 19:03:52.280644 | orchestrator | Wednesday 28 May 2025 19:03:52 +0000 (0:00:01.312) 0:06:34.210 *********
2025-05-28 19:03:53.324744 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:53.325440 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:03:53.325821 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:03:53.328017 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:03:53.329050 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:03:53.329972 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:03:53.330407 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:03:53.331393 | orchestrator | 
2025-05-28 19:03:53.332516 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-05-28 19:03:53.332872 | orchestrator | Wednesday 28 May 2025 19:03:53 +0000 (0:00:01.053) 0:06:35.264 *********
2025-05-28 19:03:54.414433 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:54.415023 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:03:54.415973 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:03:54.417253 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:03:54.418274 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:03:54.418955 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:03:54.419763 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:03:54.420628 | orchestrator | 
2025-05-28 19:03:54.421002 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-28 19:03:54.421266 | orchestrator | Wednesday 28 May 2025 19:03:54 +0000 (0:00:01.090) 0:06:36.355 *********
2025-05-28 19:03:55.715587 | orchestrator | ok: [testbed-manager]
2025-05-28 19:03:55.715771 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:03:55.719643 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:03:55.719691 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:03:55.719705 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:03:55.719716 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:03:55.719728 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:03:55.719740 | orchestrator | 
2025-05-28 19:03:55.719753 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-05-28 19:03:55.719766 | orchestrator | Wednesday 28 May 2025 19:03:55 +0000 (0:00:01.298) 0:06:37.654 *********
2025-05-28 19:03:56.871438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:03:56.873992 | orchestrator | 
2025-05-28 19:03:56.874103 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 19:03:56.874120 | orchestrator | Wednesday 28 May 2025 19:03:56 +0000 (0:00:00.874) 0:06:38.528 *********
2025-05-28 19:03:56.874132 | orchestrator | 
2025-05-28 19:03:56.874475 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 19:03:56.875116 | orchestrator | Wednesday 28 May 2025 19:03:56 +0000 (0:00:00.037) 0:06:38.566 *********
2025-05-28 19:03:56.876153 | orchestrator | 
2025-05-28 19:03:56.879263 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 19:03:56.879315 | orchestrator | Wednesday 28 May 2025 19:03:56 +0000 (0:00:00.044) 0:06:38.610 *********
2025-05-28 19:03:56.879334 | orchestrator | 
2025-05-28 19:03:56.879459 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 19:03:56.880819 | orchestrator | Wednesday 28 May 2025 19:03:56 +0000 (0:00:00.037) 0:06:38.648 *********
2025-05-28 19:03:56.882122 | orchestrator | 
2025-05-28 19:03:56.883369 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 19:03:56.885059 | orchestrator | Wednesday 28 May 2025 19:03:56 +0000 (0:00:00.037) 0:06:38.685 *********
2025-05-28 19:03:56.887118 | orchestrator | 
2025-05-28 19:03:56.888717 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 19:03:56.890111 | orchestrator | Wednesday 28 May 2025 19:03:56 +0000 (0:00:00.044) 0:06:38.729 *********
2025-05-28 19:03:56.891102 | orchestrator | 
2025-05-28 19:03:56.892124 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-28 19:03:56.893284 | orchestrator | Wednesday 28 May 2025 19:03:56 +0000 (0:00:00.037) 0:06:38.767 *********
2025-05-28 19:03:56.894430 | orchestrator | 
2025-05-28 19:03:56.895277 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-28 19:03:56.896270 | orchestrator | Wednesday 28 May 2025 19:03:56 +0000 (0:00:00.041) 0:06:38.808 *********
2025-05-28 19:03:57.858375 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:03:57.858501 | orchestrator | ok:
[testbed-node-1] 2025-05-28 19:03:57.859187 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:03:57.859518 | orchestrator | 2025-05-28 19:03:57.860068 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-28 19:03:57.860454 | orchestrator | Wednesday 28 May 2025 19:03:57 +0000 (0:00:00.988) 0:06:39.797 ********* 2025-05-28 19:03:59.332103 | orchestrator | changed: [testbed-manager] 2025-05-28 19:03:59.332526 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:03:59.333136 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:03:59.334582 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:03:59.338675 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:03:59.338986 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:03:59.339915 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:03:59.340592 | orchestrator | 2025-05-28 19:03:59.341353 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-28 19:03:59.342109 | orchestrator | Wednesday 28 May 2025 19:03:59 +0000 (0:00:01.467) 0:06:41.264 ********* 2025-05-28 19:04:00.416433 | orchestrator | changed: [testbed-manager] 2025-05-28 19:04:00.417303 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:04:00.418724 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:04:00.419718 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:04:00.420996 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:04:00.422072 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:04:00.423645 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:04:00.423948 | orchestrator | 2025-05-28 19:04:00.425182 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-28 19:04:00.425595 | orchestrator | Wednesday 28 May 2025 19:04:00 +0000 (0:00:01.090) 0:06:42.355 ********* 2025-05-28 19:04:00.556483 | orchestrator | skipping: [testbed-manager] 
2025-05-28 19:04:02.390146 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:04:02.390436 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:04:02.391067 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:04:02.391900 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:04:02.392385 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:04:02.393170 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:04:02.393837 | orchestrator |
2025-05-28 19:04:02.394301 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-05-28 19:04:02.394819 | orchestrator | Wednesday 28 May 2025 19:04:02 +0000 (0:00:01.975) 0:06:44.330 *********
2025-05-28 19:04:02.493585 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:04:02.494238 | orchestrator |
2025-05-28 19:04:02.494725 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-05-28 19:04:02.495399 | orchestrator | Wednesday 28 May 2025 19:04:02 +0000 (0:00:00.102) 0:06:44.433 *********
2025-05-28 19:04:03.569435 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:03.569516 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:04:03.569531 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:04:03.569542 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:04:03.569554 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:04:03.569629 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:04:03.569644 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:04:03.569746 | orchestrator |
2025-05-28 19:04:03.570520 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-05-28 19:04:03.570691 | orchestrator | Wednesday 28 May 2025 19:04:03 +0000 (0:00:01.074) 0:06:45.508 *********
2025-05-28 19:04:03.715834 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:04:03.777963 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:04:03.865588 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:04:03.933704 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:04:03.999148 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:04:04.316080 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:04:04.317662 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:04:04.318320 | orchestrator |
2025-05-28 19:04:04.324837 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-28 19:04:04.324896 | orchestrator | Wednesday 28 May 2025 19:04:04 +0000 (0:00:00.748) 0:06:46.257 *********
2025-05-28 19:04:05.207210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:04:05.208432 | orchestrator |
2025-05-28 19:04:05.211082 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-28 19:04:05.211118 | orchestrator | Wednesday 28 May 2025 19:04:05 +0000 (0:00:00.888) 0:06:47.146 *********
2025-05-28 19:04:05.636909 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:06.041835 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:06.043077 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:06.047582 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:06.047959 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:06.048807 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:06.049013 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:06.050500 | orchestrator |
2025-05-28 19:04:06.051033 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-28 19:04:06.051863 | orchestrator | Wednesday 28 May 2025 19:04:06 +0000 (0:00:00.835) 0:06:47.981 *********
2025-05-28 19:04:08.523132 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-28 19:04:08.526202 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-28 19:04:08.527670 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-28 19:04:08.528971 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-28 19:04:08.529839 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-28 19:04:08.530959 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-28 19:04:08.532847 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-28 19:04:08.533116 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-28 19:04:08.535192 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-28 19:04:08.535660 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-28 19:04:08.538104 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-28 19:04:08.539075 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-28 19:04:08.541293 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-28 19:04:08.541817 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-28 19:04:08.542156 | orchestrator |
2025-05-28 19:04:08.542517 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-28 19:04:08.543236 | orchestrator | Wednesday 28 May 2025 19:04:08 +0000 (0:00:02.481) 0:06:50.463 *********
2025-05-28 19:04:08.678262 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:04:08.746278 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:04:08.815646 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:04:08.879276 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:04:08.942391 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:04:09.048035 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:04:09.048142 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:04:09.048800 | orchestrator |
2025-05-28 19:04:09.050089 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-28 19:04:09.052253 | orchestrator | Wednesday 28 May 2025 19:04:09 +0000 (0:00:00.526) 0:06:50.990 *********
2025-05-28 19:04:09.855527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:04:09.856043 | orchestrator |
2025-05-28 19:04:09.857134 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-28 19:04:09.860489 | orchestrator | Wednesday 28 May 2025 19:04:09 +0000 (0:00:00.804) 0:06:51.794 *********
2025-05-28 19:04:10.281156 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:10.674100 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:10.674285 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:10.674999 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:10.675322 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:10.675967 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:10.676572 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:10.679071 | orchestrator |
2025-05-28 19:04:10.679114 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-28 19:04:10.679131 | orchestrator | Wednesday 28 May 2025 19:04:10 +0000 (0:00:00.817) 0:06:52.612 *********
2025-05-28 19:04:11.189195 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:11.258602 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:11.336852 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:11.710141 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:11.710305 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:11.713312 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:11.713973 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:11.716823 | orchestrator |
2025-05-28 19:04:11.717231 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-05-28 19:04:11.718231 | orchestrator | Wednesday 28 May 2025 19:04:11 +0000 (0:00:01.035) 0:06:53.648 *********
2025-05-28 19:04:11.841162 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:04:11.904362 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:04:11.964625 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:04:12.035755 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:04:12.113428 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:04:12.220458 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:04:12.221273 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:04:12.221984 | orchestrator |
2025-05-28 19:04:12.223056 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-05-28 19:04:12.224226 | orchestrator | Wednesday 28 May 2025 19:04:12 +0000 (0:00:00.510) 0:06:54.159 *********
2025-05-28 19:04:13.437786 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:13.438064 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:13.438961 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:13.439479 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:13.440762 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:13.440998 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:13.441450 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:13.442078 | orchestrator |
2025-05-28 19:04:13.443183 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-05-28 19:04:13.443273 | orchestrator | Wednesday 28 May 2025 19:04:13 +0000 (0:00:01.219) 0:06:55.378 *********
2025-05-28 19:04:13.561008 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:04:13.630698 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:04:13.698828 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:04:13.759147 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:04:13.823273 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:04:13.934606 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:04:13.934763 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:04:13.936219 | orchestrator |
2025-05-28 19:04:13.937154 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-05-28 19:04:13.938104 | orchestrator | Wednesday 28 May 2025 19:04:13 +0000 (0:00:00.495) 0:06:55.874 *********
2025-05-28 19:04:15.488867 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:15.489112 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:15.491584 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:15.491612 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:15.491743 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:15.492434 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:15.492749 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:15.493415 | orchestrator |
2025-05-28 19:04:15.494452 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-05-28 19:04:15.494599 | orchestrator | Wednesday 28 May 2025 19:04:15 +0000 (0:00:01.552) 0:06:57.427 *********
2025-05-28 19:04:16.978769 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:16.979256 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:04:16.980180 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:04:16.981555 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:04:16.983695 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:04:16.984709 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:04:16.985832 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:04:16.986850 | orchestrator |
2025-05-28 19:04:16.987776 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-05-28 19:04:16.988608 | orchestrator | Wednesday 28 May 2025 19:04:16 +0000 (0:00:01.492) 0:06:58.919 *********
2025-05-28 19:04:18.625612 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:18.626909 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:04:18.627466 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:04:18.628411 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:04:18.628859 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:04:18.629202 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:04:18.629720 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:04:18.630313 | orchestrator |
2025-05-28 19:04:18.630865 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-05-28 19:04:18.631138 | orchestrator | Wednesday 28 May 2025 19:04:18 +0000 (0:00:01.645) 0:07:00.564 *********
2025-05-28 19:04:20.278618 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:20.278847 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:04:20.279872 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:04:20.283647 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:04:20.283763 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:04:20.283779 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:04:20.283790 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:04:20.283802 | orchestrator |
2025-05-28 19:04:20.283815 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-28 19:04:20.284081 | orchestrator | Wednesday 28 May 2025 19:04:20 +0000 (0:00:01.652) 0:07:02.217 *********
2025-05-28 19:04:20.987250 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:21.382726 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:21.382897 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:21.383219 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:21.383503 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:21.384541 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:21.385082 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:21.385940 | orchestrator |
2025-05-28 19:04:21.386189 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-28 19:04:21.386611 | orchestrator | Wednesday 28 May 2025 19:04:21 +0000 (0:00:01.104) 0:07:03.321 *********
2025-05-28 19:04:21.534699 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:04:21.619988 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:04:21.693035 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:04:21.758194 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:04:21.835863 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:04:22.251337 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:04:22.252176 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:04:22.252500 | orchestrator |
2025-05-28 19:04:22.256939 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-05-28 19:04:22.256992 | orchestrator | Wednesday 28 May 2025 19:04:22 +0000 (0:00:00.869) 0:07:04.191 *********
2025-05-28 19:04:22.393334 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:04:22.464814 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:04:22.530194 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:04:22.612826 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:04:22.695377 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:04:22.803224 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:04:22.803585 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:04:22.805151 | orchestrator |
2025-05-28 19:04:22.806258 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-05-28 19:04:22.807127 | orchestrator | Wednesday 28 May 2025 19:04:22 +0000 (0:00:00.551) 0:07:04.743 *********
2025-05-28 19:04:22.968871 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:23.042690 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:23.114504 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:23.194179 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:23.262073 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:23.367279 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:23.367473 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:23.368062 | orchestrator |
2025-05-28 19:04:23.369017 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-05-28 19:04:23.369682 | orchestrator | Wednesday 28 May 2025 19:04:23 +0000 (0:00:00.563) 0:07:05.307 *********
2025-05-28 19:04:23.506122 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:23.569459 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:23.828328 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:23.903529 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:23.972119 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:24.095403 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:24.096184 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:24.097093 | orchestrator |
2025-05-28 19:04:24.098152 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-05-28 19:04:24.103787 | orchestrator | Wednesday 28 May 2025 19:04:24 +0000 (0:00:00.728) 0:07:06.035 *********
2025-05-28 19:04:24.245749 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:24.318940 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:24.383498 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:24.450434 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:24.521130 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:24.637451 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:24.637598 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:24.638752 | orchestrator |
2025-05-28 19:04:24.642652 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-05-28 19:04:24.642885 | orchestrator | Wednesday 28 May 2025 19:04:24 +0000 (0:00:00.542) 0:07:06.577 *********
2025-05-28 19:04:30.346262 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:30.346388 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:30.346476 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:30.347259 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:30.348283 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:30.349129 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:30.350882 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:30.350949 | orchestrator |
2025-05-28 19:04:30.351586 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-05-28 19:04:30.352438 | orchestrator | Wednesday 28 May 2025 19:04:30 +0000 (0:00:05.707) 0:07:12.285 *********
2025-05-28 19:04:30.578681 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:04:30.677041 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:04:30.745482 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:04:30.810789 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:04:30.941881 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:04:30.942357 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:04:30.944308 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:04:30.944526 | orchestrator |
2025-05-28 19:04:30.945050 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-05-28 19:04:30.945760 | orchestrator | Wednesday 28 May 2025 19:04:30 +0000 (0:00:00.596) 0:07:12.881 *********
2025-05-28 19:04:31.966181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:04:31.966281 | orchestrator |
2025-05-28 19:04:31.967614 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-05-28 19:04:31.968099 | orchestrator | Wednesday 28 May 2025 19:04:31 +0000 (0:00:01.023) 0:07:13.905 *********
2025-05-28 19:04:33.759395 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:33.759644 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:33.760610 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:33.761496 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:33.763207 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:33.764061 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:33.764603 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:33.765054 | orchestrator |
2025-05-28 19:04:33.765813 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-05-28 19:04:33.766283 | orchestrator | Wednesday 28 May 2025 19:04:33 +0000 (0:00:01.794) 0:07:15.699 *********
2025-05-28 19:04:34.889995 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:34.890358 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:34.890860 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:34.891539 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:34.892181 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:34.892752 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:34.893154 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:34.894087 | orchestrator |
2025-05-28 19:04:34.894118 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-05-28 19:04:34.894311 | orchestrator | Wednesday 28 May 2025 19:04:34 +0000 (0:00:01.129) 0:07:16.829 *********
2025-05-28 19:04:35.405442 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:35.864815 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:35.865174 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:35.867722 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:35.872624 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:35.872695 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:35.872709 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:35.872721 | orchestrator |
2025-05-28 19:04:35.872735 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-05-28 19:04:35.872747 | orchestrator | Wednesday 28 May 2025 19:04:35 +0000 (0:00:00.975) 0:07:17.805 *********
2025-05-28 19:04:37.951683 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 19:04:37.952177 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 19:04:37.952811 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 19:04:37.954968 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 19:04:37.956649 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 19:04:37.959383 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 19:04:37.960689 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-28 19:04:37.960991 | orchestrator |
2025-05-28 19:04:37.962674 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-05-28 19:04:37.962852 | orchestrator | Wednesday 28 May 2025 19:04:37 +0000 (0:00:02.086) 0:07:19.891 *********
2025-05-28 19:04:38.807118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:04:38.807310 | orchestrator |
2025-05-28 19:04:38.810967 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-05-28 19:04:38.811009 | orchestrator | Wednesday 28 May 2025 19:04:38 +0000 (0:00:00.854) 0:07:20.745 *********
2025-05-28 19:04:47.728366 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:04:47.728484 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:04:47.729039 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:04:47.729930 | orchestrator | changed: [testbed-manager]
2025-05-28 19:04:47.733062 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:04:47.733141 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:04:47.733155 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:04:47.733212 | orchestrator |
2025-05-28 19:04:47.734509 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-05-28 19:04:47.735686 | orchestrator | Wednesday 28 May 2025 19:04:47 +0000 (0:00:08.919) 0:07:29.664 *********
2025-05-28 19:04:50.153968 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:50.154082 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:50.154151 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:50.154661 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:50.155100 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:50.155611 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:50.159756 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:50.159791 | orchestrator |
2025-05-28 19:04:50.159823 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-05-28 19:04:50.159861 | orchestrator | Wednesday 28 May 2025 19:04:50 +0000 (0:00:02.428) 0:07:32.093 *********
2025-05-28 19:04:51.449683 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:51.450280 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:51.453016 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:51.454412 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:51.456450 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:51.456481 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:51.457450 | orchestrator |
2025-05-28 19:04:51.458535 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-05-28 19:04:51.458790 | orchestrator | Wednesday 28 May 2025 19:04:51 +0000 (0:00:01.294) 0:07:33.387 *********
2025-05-28 19:04:52.713461 | orchestrator | changed: [testbed-manager]
2025-05-28 19:04:52.716832 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:04:52.716865 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:04:52.716878 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:04:52.717947 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:04:52.718156 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:04:52.718541 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:04:52.719071 | orchestrator |
2025-05-28 19:04:52.719642 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-05-28 19:04:52.719997 | orchestrator |
2025-05-28 19:04:52.720516 | orchestrator | TASK [Include hardening role] **************************************************
2025-05-28 19:04:52.721246 | orchestrator | Wednesday 28 May 2025 19:04:52 +0000 (0:00:01.263) 0:07:34.651 *********
2025-05-28 19:04:53.047396 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:04:53.107393 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:04:53.191043 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:04:53.252966 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:04:53.318621 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:04:53.448205 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:04:53.448538 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:04:53.448935 | orchestrator |
2025-05-28 19:04:53.449588 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-05-28 19:04:53.453887 | orchestrator |
2025-05-28 19:04:53.453952 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-05-28 19:04:53.453965 | orchestrator | Wednesday 28 May 2025 19:04:53 +0000 (0:00:00.735) 0:07:35.387 *********
2025-05-28 19:04:54.755045 | orchestrator | changed: [testbed-manager]
2025-05-28 19:04:54.755853 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:04:54.757019 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:04:54.758377 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:04:54.759266 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:04:54.760128 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:04:54.761536 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:04:54.763697 | orchestrator |
2025-05-28 19:04:54.764789 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-05-28 19:04:54.765612 | orchestrator | Wednesday 28 May 2025 19:04:54 +0000 (0:00:01.306) 0:07:36.693 *********
2025-05-28 19:04:56.207546 | orchestrator | ok: [testbed-manager]
2025-05-28 19:04:56.207793 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:04:56.211847 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:04:56.212002 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:04:56.212020 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:04:56.212032 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:04:56.213046 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:04:56.213946 | orchestrator |
2025-05-28 19:04:56.215206 | orchestrator | TASK [Include auditd role] *****************************************************
2025-05-28 19:04:56.215621 | orchestrator | Wednesday 28 May 2025 19:04:56 +0000 (0:00:01.453) 0:07:38.146 *********
2025-05-28 19:04:56.349998 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:04:56.420799 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:04:56.487559 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:04:56.740150 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:04:56.802331 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:04:57.242714 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:04:57.243486 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:04:57.244650 | orchestrator |
2025-05-28 19:04:57.246160 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-05-28 19:04:57.247202 | orchestrator | Wednesday 28 May 2025 19:04:57 +0000 (0:00:01.035) 0:07:39.182 *********
2025-05-28 19:04:58.556812 | orchestrator | changed: [testbed-manager]
2025-05-28 19:04:58.557743 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:04:58.560884 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:04:58.562605 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:04:58.562949 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:04:58.564192 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:04:58.565448 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:04:58.566083 | orchestrator |
2025-05-28 19:04:58.570122 |
orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-28 19:04:58.570735 | orchestrator | 2025-05-28 19:04:58.572574 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-28 19:04:58.574257 | orchestrator | Wednesday 28 May 2025 19:04:58 +0000 (0:00:01.314) 0:07:40.497 ********* 2025-05-28 19:04:59.432647 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:04:59.433413 | orchestrator | 2025-05-28 19:04:59.434551 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-28 19:04:59.435210 | orchestrator | Wednesday 28 May 2025 19:04:59 +0000 (0:00:00.873) 0:07:41.371 ********* 2025-05-28 19:04:59.975296 | orchestrator | ok: [testbed-manager] 2025-05-28 19:05:00.059299 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:05:00.138470 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:05:00.591959 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:05:00.592094 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:05:00.592120 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:05:00.593240 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:05:00.594756 | orchestrator | 2025-05-28 19:05:00.595596 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-28 19:05:00.596109 | orchestrator | Wednesday 28 May 2025 19:05:00 +0000 (0:00:01.159) 0:07:42.530 ********* 2025-05-28 19:05:01.774356 | orchestrator | changed: [testbed-manager] 2025-05-28 19:05:01.774818 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:05:01.775308 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:05:01.778389 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:05:01.781178 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:05:01.784219 | orchestrator | 
changed: [testbed-node-1] 2025-05-28 19:05:01.784568 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:05:01.787314 | orchestrator | 2025-05-28 19:05:01.787742 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-28 19:05:01.788113 | orchestrator | Wednesday 28 May 2025 19:05:01 +0000 (0:00:01.176) 0:07:43.706 ********* 2025-05-28 19:05:02.716719 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:05:02.716994 | orchestrator | 2025-05-28 19:05:02.717266 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-28 19:05:02.717638 | orchestrator | Wednesday 28 May 2025 19:05:02 +0000 (0:00:00.947) 0:07:44.654 ********* 2025-05-28 19:05:03.143054 | orchestrator | ok: [testbed-manager] 2025-05-28 19:05:03.804752 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:05:03.804975 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:05:03.806138 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:05:03.806488 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:05:03.807883 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:05:03.808841 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:05:03.810130 | orchestrator | 2025-05-28 19:05:03.810889 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-28 19:05:03.811548 | orchestrator | Wednesday 28 May 2025 19:05:03 +0000 (0:00:01.090) 0:07:45.745 ********* 2025-05-28 19:05:04.259150 | orchestrator | changed: [testbed-manager] 2025-05-28 19:05:04.996520 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:05:04.999337 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:05:05.000413 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:05:05.001429 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:05:05.002358 | orchestrator | 
changed: [testbed-node-1] 2025-05-28 19:05:05.003358 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:05:05.004957 | orchestrator | 2025-05-28 19:05:05.006237 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:05:05.006585 | orchestrator | 2025-05-28 19:05:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 19:05:05.006644 | orchestrator | 2025-05-28 19:05:05 | INFO  | Please wait and do not abort execution. 2025-05-28 19:05:05.007438 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-28 19:05:05.008137 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-28 19:05:05.008881 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-28 19:05:05.010174 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-28 19:05:05.011010 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-28 19:05:05.011224 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-28 19:05:05.011769 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-28 19:05:05.012419 | orchestrator | 2025-05-28 19:05:05.013211 | orchestrator | Wednesday 28 May 2025 19:05:04 +0000 (0:00:01.191) 0:07:46.937 ********* 2025-05-28 19:05:05.014062 | orchestrator | =============================================================================== 2025-05-28 19:05:05.014956 | orchestrator | osism.commons.packages : Install required packages --------------------- 82.53s 2025-05-28 19:05:05.015433 | orchestrator | osism.commons.packages : Download required packages 
-------------------- 37.55s 2025-05-28 19:05:05.016447 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.49s 2025-05-28 19:05:05.016826 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.89s 2025-05-28 19:05:05.017329 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.87s 2025-05-28 19:05:05.017726 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.49s 2025-05-28 19:05:05.018209 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.48s 2025-05-28 19:05:05.018641 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 11.20s 2025-05-28 19:05:05.019212 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.92s 2025-05-28 19:05:05.019450 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.70s 2025-05-28 19:05:05.020558 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.45s 2025-05-28 19:05:05.020613 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.52s 2025-05-28 19:05:05.020720 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 6.86s 2025-05-28 19:05:05.021235 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.66s 2025-05-28 19:05:05.021343 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 6.56s 2025-05-28 19:05:05.022061 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.77s 2025-05-28 19:05:05.022338 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.71s 2025-05-28 19:05:05.022776 | orchestrator | osism.services.chrony : Populate service facts 
-------------------------- 5.71s 2025-05-28 19:05:05.023054 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.31s 2025-05-28 19:05:05.023334 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.20s 2025-05-28 19:05:05.755500 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-28 19:05:05.755602 | orchestrator | + osism apply network 2025-05-28 19:05:07.966172 | orchestrator | 2025-05-28 19:05:07 | INFO  | Task 12ef079e-72f2-47a8-afdb-70f32ddaea0c (network) was prepared for execution. 2025-05-28 19:05:07.966269 | orchestrator | 2025-05-28 19:05:07 | INFO  | It takes a moment until task 12ef079e-72f2-47a8-afdb-70f32ddaea0c (network) has been started and output is visible here. 2025-05-28 19:05:11.414807 | orchestrator | 2025-05-28 19:05:11.415088 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-28 19:05:11.415194 | orchestrator | 2025-05-28 19:05:11.415696 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-28 19:05:11.416077 | orchestrator | Wednesday 28 May 2025 19:05:11 +0000 (0:00:00.205) 0:00:00.205 ********* 2025-05-28 19:05:11.561171 | orchestrator | ok: [testbed-manager] 2025-05-28 19:05:11.640384 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:05:11.716729 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:05:11.795607 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:05:11.881553 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:05:12.131948 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:05:12.132118 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:05:12.133183 | orchestrator | 2025-05-28 19:05:12.134110 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-28 19:05:12.134824 | orchestrator | Wednesday 28 May 2025 19:05:12 +0000 (0:00:00.720) 0:00:00.926 ********* 2025-05-28 19:05:13.444774 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:05:13.445288 | orchestrator | 2025-05-28 19:05:13.446333 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-28 19:05:13.448007 | orchestrator | Wednesday 28 May 2025 19:05:13 +0000 (0:00:01.312) 0:00:02.238 ********* 2025-05-28 19:05:15.126075 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:05:15.126183 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:05:15.126539 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:05:15.127176 | orchestrator | ok: [testbed-manager] 2025-05-28 19:05:15.128457 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:05:15.128931 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:05:15.129530 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:05:15.132090 | orchestrator | 2025-05-28 19:05:15.132127 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-28 19:05:15.132142 | orchestrator | Wednesday 28 May 2025 19:05:15 +0000 (0:00:01.679) 0:00:03.918 ********* 2025-05-28 19:05:16.997706 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:05:16.998400 | orchestrator | ok: [testbed-manager] 2025-05-28 19:05:16.999033 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:05:17.000343 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:05:17.000367 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:05:17.000906 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:05:17.000999 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:05:17.001549 | orchestrator | 2025-05-28 19:05:17.002970 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-28 19:05:17.003314 | orchestrator | Wednesday 28 May 2025 19:05:16 +0000 (0:00:01.871) 
0:00:05.790 ********* 2025-05-28 19:05:17.544812 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-28 19:05:17.545457 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-28 19:05:17.545984 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-28 19:05:18.179686 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-28 19:05:18.180030 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-28 19:05:18.181944 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-28 19:05:18.181972 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-28 19:05:18.182427 | orchestrator | 2025-05-28 19:05:18.184290 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-28 19:05:18.184386 | orchestrator | Wednesday 28 May 2025 19:05:18 +0000 (0:00:01.182) 0:00:06.972 ********* 2025-05-28 19:05:19.975681 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-28 19:05:19.975787 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-28 19:05:19.975877 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 19:05:19.976037 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-28 19:05:19.976388 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-28 19:05:19.979268 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-28 19:05:19.980281 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-28 19:05:19.981068 | orchestrator | 2025-05-28 19:05:19.981960 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-28 19:05:19.982547 | orchestrator | Wednesday 28 May 2025 19:05:19 +0000 (0:00:01.799) 0:00:08.771 ********* 2025-05-28 19:05:20.870434 | orchestrator | changed: [testbed-manager] 2025-05-28 19:05:21.871854 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:05:21.875147 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:05:21.875185 | 
orchestrator | changed: [testbed-node-2] 2025-05-28 19:05:21.875197 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:05:21.875208 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:05:21.876040 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:05:21.877225 | orchestrator | 2025-05-28 19:05:21.877489 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-28 19:05:21.878198 | orchestrator | Wednesday 28 May 2025 19:05:21 +0000 (0:00:01.891) 0:00:10.663 ********* 2025-05-28 19:05:22.362511 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-28 19:05:22.968043 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-28 19:05:22.968141 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 19:05:22.969283 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-28 19:05:22.970455 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-28 19:05:22.971666 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-28 19:05:22.972317 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-28 19:05:22.973453 | orchestrator | 2025-05-28 19:05:22.973863 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-28 19:05:22.974473 | orchestrator | Wednesday 28 May 2025 19:05:22 +0000 (0:00:01.101) 0:00:11.764 ********* 2025-05-28 19:05:23.436834 | orchestrator | ok: [testbed-manager] 2025-05-28 19:05:23.523329 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:05:24.167190 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:05:24.168816 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:05:24.171671 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:05:24.173174 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:05:24.174455 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:05:24.175212 | orchestrator | 2025-05-28 19:05:24.176158 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 
2025-05-28 19:05:24.176622 | orchestrator | Wednesday 28 May 2025 19:05:24 +0000 (0:00:01.192) 0:00:12.957 ********* 2025-05-28 19:05:24.366119 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:05:24.457596 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:05:24.565633 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:05:24.651950 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:05:24.745285 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:05:25.224060 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:05:25.224274 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:05:25.225938 | orchestrator | 2025-05-28 19:05:25.227270 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-28 19:05:25.228255 | orchestrator | Wednesday 28 May 2025 19:05:25 +0000 (0:00:01.057) 0:00:14.015 ********* 2025-05-28 19:05:27.237742 | orchestrator | ok: [testbed-manager] 2025-05-28 19:05:27.238871 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:05:27.242344 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:05:27.242384 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:05:27.242396 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:05:27.243056 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:05:27.244606 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:05:27.244628 | orchestrator | 2025-05-28 19:05:27.246097 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-28 19:05:27.246664 | orchestrator | Wednesday 28 May 2025 19:05:27 +0000 (0:00:02.018) 0:00:16.033 ********* 2025-05-28 19:05:29.228040 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-28 19:05:29.228163 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-28 19:05:29.228935 | 
orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-28 19:05:29.233706 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-28 19:05:29.234199 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-28 19:05:29.235671 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-28 19:05:29.236567 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-28 19:05:29.237640 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-28 19:05:29.237918 | orchestrator | 2025-05-28 19:05:29.239788 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-28 19:05:29.240521 | orchestrator | Wednesday 28 May 2025 19:05:29 +0000 (0:00:01.986) 0:00:18.020 ********* 2025-05-28 19:05:30.769426 | orchestrator | ok: [testbed-manager] 2025-05-28 19:05:30.770162 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:05:30.771438 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:05:30.772292 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:05:30.773956 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:05:30.774152 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:05:30.775294 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:05:30.775565 | orchestrator | 2025-05-28 19:05:30.777082 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-28 19:05:30.777423 | orchestrator | Wednesday 28 May 2025 19:05:30 +0000 (0:00:01.545) 0:00:19.565 ********* 2025-05-28 
19:05:32.336928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:05:32.337808 | orchestrator | 2025-05-28 19:05:32.341231 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-28 19:05:32.341262 | orchestrator | Wednesday 28 May 2025 19:05:32 +0000 (0:00:01.564) 0:00:21.129 ********* 2025-05-28 19:05:33.328387 | orchestrator | ok: [testbed-manager] 2025-05-28 19:05:33.329353 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:05:33.332577 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:05:33.334209 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:05:33.335679 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:05:33.337084 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:05:33.337853 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:05:33.339448 | orchestrator | 2025-05-28 19:05:33.340240 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-28 19:05:33.341477 | orchestrator | Wednesday 28 May 2025 19:05:33 +0000 (0:00:00.991) 0:00:22.121 ********* 2025-05-28 19:05:33.495716 | orchestrator | ok: [testbed-manager] 2025-05-28 19:05:33.582778 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:05:33.855238 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:05:33.942774 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:05:34.038317 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:05:34.190607 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:05:34.191276 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:05:34.191958 | orchestrator | 2025-05-28 19:05:34.195318 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-28 19:05:34.195368 | orchestrator | Wednesday 28 May 2025 19:05:34 +0000 
(0:00:00.861) 0:00:22.982 ********* 2025-05-28 19:05:34.641215 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-28 19:05:34.641675 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-28 19:05:34.731338 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-28 19:05:35.216811 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-28 19:05:35.217084 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-28 19:05:35.218225 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-28 19:05:35.219005 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-28 19:05:35.219506 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-28 19:05:35.219953 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-28 19:05:35.220576 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-28 19:05:35.220938 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-28 19:05:35.221544 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-28 19:05:35.221983 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-28 19:05:35.223864 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-28 19:05:35.223969 | orchestrator | 2025-05-28 19:05:35.224043 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-28 19:05:35.224131 | orchestrator | Wednesday 28 May 2025 19:05:35 +0000 (0:00:01.027) 0:00:24.010 ********* 2025-05-28 19:05:35.560707 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:05:35.645381 | orchestrator | skipping: 
[testbed-node-0] 2025-05-28 19:05:35.729292 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:05:35.813221 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:05:35.899772 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:05:37.119504 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:05:37.120377 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:05:37.121277 | orchestrator | 2025-05-28 19:05:37.122833 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-28 19:05:37.123935 | orchestrator | Wednesday 28 May 2025 19:05:37 +0000 (0:00:01.902) 0:00:25.912 ********* 2025-05-28 19:05:37.286208 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:05:37.368727 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:05:37.667917 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:05:37.753461 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:05:37.842279 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:05:37.882629 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:05:37.883279 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:05:37.883982 | orchestrator | 2025-05-28 19:05:37.884046 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:05:37.884933 | orchestrator | 2025-05-28 19:05:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 19:05:37.884960 | orchestrator | 2025-05-28 19:05:37 | INFO  | Please wait and do not abort execution. 
2025-05-28 19:05:37.885456 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:05:37.886083 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:05:37.886380 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:05:37.887169 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:05:37.887415 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:05:37.888118 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:05:37.888740 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:05:37.889351 | orchestrator | 2025-05-28 19:05:37.890194 | orchestrator | Wednesday 28 May 2025 19:05:37 +0000 (0:00:00.767) 0:00:26.679 ********* 2025-05-28 19:05:37.890576 | orchestrator | =============================================================================== 2025-05-28 19:05:37.893689 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.02s 2025-05-28 19:05:37.895273 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.99s 2025-05-28 19:05:37.897084 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.90s 2025-05-28 19:05:37.897371 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.89s 2025-05-28 19:05:37.898779 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.87s 2025-05-28 19:05:37.899464 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.80s 2025-05-28 19:05:37.899967 | 
orchestrator | osism.commons.network : Install required packages ----------------------- 1.68s 2025-05-28 19:05:37.900618 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.56s 2025-05-28 19:05:37.901478 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.55s 2025-05-28 19:05:37.902092 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.31s 2025-05-28 19:05:37.902598 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s 2025-05-28 19:05:37.903531 | orchestrator | osism.commons.network : Create required directories --------------------- 1.18s 2025-05-28 19:05:37.904121 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.10s 2025-05-28 19:05:37.904645 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 1.06s 2025-05-28 19:05:37.905360 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.03s 2025-05-28 19:05:37.905977 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s 2025-05-28 19:05:37.906581 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.86s 2025-05-28 19:05:37.906687 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.77s 2025-05-28 19:05:37.907417 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.72s 2025-05-28 19:05:38.500359 | orchestrator | + osism apply wireguard 2025-05-28 19:05:40.037410 | orchestrator | 2025-05-28 19:05:40 | INFO  | Task f8407a73-8492-4f54-90ed-0f0704b1a260 (wireguard) was prepared for execution. 
2025-05-28 19:05:40.037525 | orchestrator | 2025-05-28 19:05:40 | INFO  | It takes a moment until task f8407a73-8492-4f54-90ed-0f0704b1a260 (wireguard) has been started and output is visible here.
2025-05-28 19:05:43.371498 | orchestrator |
2025-05-28 19:05:43.372048 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-05-28 19:05:43.372627 | orchestrator |
2025-05-28 19:05:43.373294 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-05-28 19:05:43.375204 | orchestrator | Wednesday 28 May 2025 19:05:43 +0000 (0:00:00.168) 0:00:00.168 *********
2025-05-28 19:05:45.058625 | orchestrator | ok: [testbed-manager]
2025-05-28 19:05:45.058761 | orchestrator |
2025-05-28 19:05:45.058779 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-05-28 19:05:45.058849 | orchestrator | Wednesday 28 May 2025 19:05:45 +0000 (0:00:01.686) 0:00:01.854 *********
2025-05-28 19:05:52.004534 | orchestrator | changed: [testbed-manager]
2025-05-28 19:05:52.004660 | orchestrator |
2025-05-28 19:05:52.005008 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-05-28 19:05:52.005484 | orchestrator | Wednesday 28 May 2025 19:05:51 +0000 (0:00:06.948) 0:00:08.802 *********
2025-05-28 19:05:52.570377 | orchestrator | changed: [testbed-manager]
2025-05-28 19:05:52.571043 | orchestrator |
2025-05-28 19:05:52.573381 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-05-28 19:05:52.573408 | orchestrator | Wednesday 28 May 2025 19:05:52 +0000 (0:00:00.567) 0:00:09.370 *********
2025-05-28 19:05:53.032569 | orchestrator | changed: [testbed-manager]
2025-05-28 19:05:53.032834 | orchestrator |
2025-05-28 19:05:53.033641 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-05-28 19:05:53.035397 | orchestrator | Wednesday 28 May 2025 19:05:53 +0000 (0:00:00.461) 0:00:09.831 *********
2025-05-28 19:05:53.550709 | orchestrator | ok: [testbed-manager]
2025-05-28 19:05:53.551032 | orchestrator |
2025-05-28 19:05:53.551537 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-05-28 19:05:53.552382 | orchestrator | Wednesday 28 May 2025 19:05:53 +0000 (0:00:00.517) 0:00:10.348 *********
2025-05-28 19:05:54.084436 | orchestrator | ok: [testbed-manager]
2025-05-28 19:05:54.086296 | orchestrator |
2025-05-28 19:05:54.086338 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-05-28 19:05:54.086780 | orchestrator | Wednesday 28 May 2025 19:05:54 +0000 (0:00:00.534) 0:00:10.883 *********
2025-05-28 19:05:54.503070 | orchestrator | ok: [testbed-manager]
2025-05-28 19:05:54.504540 | orchestrator |
2025-05-28 19:05:54.506601 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-05-28 19:05:54.507055 | orchestrator | Wednesday 28 May 2025 19:05:54 +0000 (0:00:00.418) 0:00:11.302 *********
2025-05-28 19:05:55.747221 | orchestrator | changed: [testbed-manager]
2025-05-28 19:05:55.747932 | orchestrator |
2025-05-28 19:05:55.748946 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-05-28 19:05:55.749394 | orchestrator | Wednesday 28 May 2025 19:05:55 +0000 (0:00:01.242) 0:00:12.545 *********
2025-05-28 19:05:56.651239 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-28 19:05:56.651410 | orchestrator | changed: [testbed-manager]
2025-05-28 19:05:56.653137 | orchestrator |
2025-05-28 19:05:56.653170 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-05-28 19:05:56.653184 | orchestrator | Wednesday 28 May 2025 19:05:56 +0000 (0:00:00.904) 0:00:13.449 *********
2025-05-28 19:05:58.384391 | orchestrator | changed: [testbed-manager]
2025-05-28 19:05:58.384511 | orchestrator |
2025-05-28 19:05:58.385316 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-05-28 19:05:58.387251 | orchestrator | Wednesday 28 May 2025 19:05:58 +0000 (0:00:01.728) 0:00:15.178 *********
2025-05-28 19:05:59.318210 | orchestrator | changed: [testbed-manager]
2025-05-28 19:05:59.318318 | orchestrator |
2025-05-28 19:05:59.318391 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:05:59.318735 | orchestrator | 2025-05-28 19:05:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 19:05:59.318761 | orchestrator | 2025-05-28 19:05:59 | INFO  | Please wait and do not abort execution.
2025-05-28 19:05:59.319665 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:05:59.320116 | orchestrator |
2025-05-28 19:05:59.320652 | orchestrator | Wednesday 28 May 2025 19:05:59 +0000 (0:00:00.937) 0:00:16.116 *********
2025-05-28 19:05:59.321044 | orchestrator | ===============================================================================
2025-05-28 19:05:59.322104 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.95s
2025-05-28 19:05:59.322201 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s
2025-05-28 19:05:59.322512 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.69s
2025-05-28 19:05:59.323291 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.24s
2025-05-28 19:05:59.323981 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2025-05-28 19:05:59.324551 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s
2025-05-28 19:05:59.325036 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2025-05-28 19:05:59.325589 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s
2025-05-28 19:05:59.326187 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s
2025-05-28 19:05:59.326542 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s
2025-05-28 19:05:59.327061 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2025-05-28 19:05:59.865343 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-05-28 19:05:59.907776 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-05-28 19:05:59.907927 | orchestrator | Dload Upload Total Spent Left Speed
2025-05-28 19:05:59.985936 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 190 0 --:--:-- --:--:-- --:--:-- 192
2025-05-28 19:06:00.001606 | orchestrator | + osism apply --environment custom workarounds
2025-05-28 19:06:01.449255 | orchestrator | 2025-05-28 19:06:01 | INFO  | Trying to run play workarounds in environment custom
2025-05-28 19:06:01.496943 | orchestrator | 2025-05-28 19:06:01 | INFO  | Task a4f067e1-2ead-4897-9768-9471421373fc (workarounds) was prepared for execution.
2025-05-28 19:06:01.497039 | orchestrator | 2025-05-28 19:06:01 | INFO  | It takes a moment until task a4f067e1-2ead-4897-9768-9471421373fc (workarounds) has been started and output is visible here.
2025-05-28 19:06:04.628446 | orchestrator |
2025-05-28 19:06:04.629304 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 19:06:04.629343 | orchestrator |
2025-05-28 19:06:04.629371 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-05-28 19:06:04.629396 | orchestrator | Wednesday 28 May 2025 19:06:04 +0000 (0:00:00.146) 0:00:00.146 *********
2025-05-28 19:06:04.795064 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-05-28 19:06:04.885314 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-05-28 19:06:04.985662 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-05-28 19:06:05.090085 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-05-28 19:06:05.174215 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-05-28 19:06:05.482985 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-05-28 19:06:05.483227 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-05-28 19:06:05.484219 | orchestrator |
2025-05-28 19:06:05.484937 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-05-28 19:06:05.485897 | orchestrator |
2025-05-28 19:06:05.486598 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-28 19:06:05.487947 | orchestrator | Wednesday 28 May 2025 19:06:05 +0000 (0:00:00.852) 0:00:00.998 *********
2025-05-28 19:06:08.251216 | orchestrator | ok: [testbed-manager]
2025-05-28 19:06:08.252747 | orchestrator |
2025-05-28 19:06:08.255301 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-05-28 19:06:08.258588 | orchestrator |
2025-05-28 19:06:08.260517 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-28 19:06:08.261330 | orchestrator | Wednesday 28 May 2025 19:06:08 +0000 (0:00:02.770) 0:00:03.768 *********
2025-05-28 19:06:10.307678 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:06:10.308220 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:06:10.309197 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:06:10.310622 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:06:10.312610 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:06:10.313433 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:06:10.314092 | orchestrator |
2025-05-28 19:06:10.314933 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-05-28 19:06:10.315479 | orchestrator |
2025-05-28 19:06:10.316359 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-05-28 19:06:10.316461 | orchestrator | Wednesday 28 May 2025 19:06:10 +0000 (0:00:02.055) 0:00:05.823 *********
2025-05-28 19:06:11.894401 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 19:06:11.895359 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 19:06:11.895898 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 19:06:11.897650 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 19:06:11.899035 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 19:06:11.899062 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-28 19:06:11.899598 | orchestrator |
2025-05-28 19:06:11.900498 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-05-28 19:06:11.901076 | orchestrator | Wednesday 28 May 2025 19:06:11 +0000 (0:00:01.588) 0:00:07.412 *********
2025-05-28 19:06:15.791220 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:06:15.791399 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:06:15.791847 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:06:15.792488 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:06:15.794104 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:06:15.795618 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:06:15.795660 | orchestrator |
2025-05-28 19:06:15.799237 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-05-28 19:06:15.799344 | orchestrator | Wednesday 28 May 2025 19:06:15 +0000 (0:00:03.899) 0:00:11.312 *********
2025-05-28 19:06:15.971326 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:06:16.052354 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:06:16.131238 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:06:16.406704 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:06:16.557949 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:06:16.558392 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:06:16.558582 | orchestrator |
2025-05-28 19:06:16.559182 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-05-28 19:06:16.559435 | orchestrator |
2025-05-28 19:06:16.560113 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-05-28 19:06:16.560226 | orchestrator | Wednesday 28 May 2025 19:06:16 +0000 (0:00:00.769) 0:00:12.081 *********
2025-05-28 19:06:18.255545 | orchestrator | changed: [testbed-manager]
2025-05-28 19:06:18.255657 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:06:18.256015 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:06:18.256480 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:06:18.256903 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:06:18.257123 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:06:18.258995 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:06:18.259021 | orchestrator |
2025-05-28 19:06:18.261035 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-05-28 19:06:18.261445 | orchestrator | Wednesday 28 May 2025 19:06:18 +0000 (0:00:01.695) 0:00:13.776 *********
2025-05-28 19:06:20.042405 | orchestrator | changed: [testbed-manager]
2025-05-28 19:06:20.042894 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:06:20.046419 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:06:20.046513 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:06:20.046540 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:06:20.046554 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:06:20.046565 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:06:20.046577 | orchestrator |
2025-05-28 19:06:20.046644 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-05-28 19:06:20.047224 | orchestrator | Wednesday 28 May 2025 19:06:20 +0000 (0:00:01.781) 0:00:15.557 *********
2025-05-28 19:06:21.646170 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:06:21.649678 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:06:21.650581 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:06:21.651964 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:06:21.654000 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:06:21.654692 | orchestrator | ok: [testbed-manager]
2025-05-28 19:06:21.656199 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:06:21.657297 | orchestrator |
2025-05-28 19:06:21.659072 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-05-28 19:06:21.659818 | orchestrator | Wednesday 28 May 2025 19:06:21 +0000 (0:00:01.606) 0:00:17.163 *********
2025-05-28 19:06:23.393597 | orchestrator | changed: [testbed-manager]
2025-05-28 19:06:23.393734 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:06:23.394892 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:06:23.395631 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:06:23.396337 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:06:23.397664 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:06:23.397689 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:06:23.399301 | orchestrator |
2025-05-28 19:06:23.399345 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-05-28 19:06:23.399358 | orchestrator | Wednesday 28 May 2025 19:06:23 +0000 (0:00:01.750) 0:00:18.914 *********
2025-05-28 19:06:23.571222 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:06:23.662916 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:06:23.750200 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:06:23.827590 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:06:24.102554 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:06:24.245341 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:06:24.245502 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:06:24.246003 | orchestrator |
2025-05-28 19:06:24.247439 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-05-28 19:06:24.253323 | orchestrator |
2025-05-28 19:06:24.253821 | orchestrator | TASK [Install python3-docker] **************************************************
2025-05-28 19:06:24.255627 | orchestrator | Wednesday 28 May 2025 19:06:24 +0000 (0:00:00.853) 0:00:19.767 *********
2025-05-28 19:06:26.521430 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:06:26.521566 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:06:26.521960 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:06:26.522643 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:06:26.523400 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:06:26.524159 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:06:26.525783 | orchestrator | ok: [testbed-manager]
2025-05-28 19:06:26.526428 | orchestrator |
2025-05-28 19:06:26.527092 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:06:26.527722 | orchestrator | 2025-05-28 19:06:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 19:06:26.528217 | orchestrator | 2025-05-28 19:06:26 | INFO  | Please wait and do not abort execution.
2025-05-28 19:06:26.529251 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:06:26.529333 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:26.529976 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:26.530599 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:26.530676 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:26.531329 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:26.531472 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:26.531933 | orchestrator |
2025-05-28 19:06:26.532198 | orchestrator | Wednesday 28 May 2025 19:06:26 +0000 (0:00:02.274) 0:00:22.042 *********
2025-05-28 19:06:26.532506 | orchestrator | ===============================================================================
2025-05-28 19:06:26.533005 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.90s
2025-05-28 19:06:26.534600 | orchestrator | Apply netplan configuration --------------------------------------------- 2.77s
2025-05-28 19:06:26.534742 | orchestrator | Install python3-docker -------------------------------------------------- 2.27s
2025-05-28 19:06:26.535568 | orchestrator | Apply netplan configuration --------------------------------------------- 2.06s
2025-05-28 19:06:26.535715 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.78s
2025-05-28 19:06:26.536292 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.75s
2025-05-28 19:06:26.536979 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s
2025-05-28 19:06:26.537463 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.61s
2025-05-28 19:06:26.539290 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.59s
2025-05-28 19:06:26.539925 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.85s
2025-05-28 19:06:26.540470 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.85s
2025-05-28 19:06:26.540618 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s
2025-05-28 19:06:27.165396 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-05-28 19:06:28.698410 | orchestrator | 2025-05-28 19:06:28 | INFO  | Task 12ddeced-b09f-47ee-b8cd-0ae99b07df1c (reboot) was prepared for execution.
2025-05-28 19:06:28.698536 | orchestrator | 2025-05-28 19:06:28 | INFO  | It takes a moment until task 12ddeced-b09f-47ee-b8cd-0ae99b07df1c (reboot) has been started and output is visible here.
2025-05-28 19:06:32.028556 | orchestrator |
2025-05-28 19:06:32.029235 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-28 19:06:32.030113 | orchestrator |
2025-05-28 19:06:32.031949 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-28 19:06:32.032405 | orchestrator | Wednesday 28 May 2025 19:06:32 +0000 (0:00:00.155) 0:00:00.155 *********
2025-05-28 19:06:32.138250 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:06:32.139037 | orchestrator |
2025-05-28 19:06:32.139548 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-28 19:06:32.140494 | orchestrator | Wednesday 28 May 2025 19:06:32 +0000 (0:00:00.113) 0:00:00.268 *********
2025-05-28 19:06:33.078637 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:06:33.078930 | orchestrator |
2025-05-28 19:06:33.078956 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-28 19:06:33.079276 | orchestrator | Wednesday 28 May 2025 19:06:33 +0000 (0:00:00.940) 0:00:01.208 *********
2025-05-28 19:06:33.192339 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:06:33.192499 | orchestrator |
2025-05-28 19:06:33.193299 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-28 19:06:33.193904 | orchestrator |
2025-05-28 19:06:33.195662 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-28 19:06:33.195693 | orchestrator | Wednesday 28 May 2025 19:06:33 +0000 (0:00:00.111) 0:00:01.320 *********
2025-05-28 19:06:33.297515 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:06:33.297819 | orchestrator |
2025-05-28 19:06:33.300078 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-28 19:06:33.301271 | orchestrator | Wednesday 28 May 2025 19:06:33 +0000 (0:00:00.106) 0:00:01.426 *********
2025-05-28 19:06:33.944216 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:06:33.944441 | orchestrator |
2025-05-28 19:06:33.944491 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-28 19:06:33.945049 | orchestrator | Wednesday 28 May 2025 19:06:33 +0000 (0:00:00.647) 0:00:02.074 *********
2025-05-28 19:06:34.063010 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:06:34.064192 | orchestrator |
2025-05-28 19:06:34.064354 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-28 19:06:34.065332 | orchestrator |
2025-05-28 19:06:34.067100 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-28 19:06:34.067693 | orchestrator | Wednesday 28 May 2025 19:06:34 +0000 (0:00:00.118) 0:00:02.192 *********
2025-05-28 19:06:34.164375 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:06:34.164746 | orchestrator |
2025-05-28 19:06:34.165503 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-28 19:06:34.167182 | orchestrator | Wednesday 28 May 2025 19:06:34 +0000 (0:00:00.100) 0:00:02.293 *********
2025-05-28 19:06:34.885970 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:06:34.886291 | orchestrator |
2025-05-28 19:06:34.887056 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-28 19:06:34.887929 | orchestrator | Wednesday 28 May 2025 19:06:34 +0000 (0:00:00.722) 0:00:03.015 *********
2025-05-28 19:06:35.042373 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:06:35.042535 | orchestrator |
2025-05-28 19:06:35.042771 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-28 19:06:35.043190 | orchestrator |
2025-05-28 19:06:35.043435 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-28 19:06:35.044038 | orchestrator | Wednesday 28 May 2025 19:06:35 +0000 (0:00:00.153) 0:00:03.169 *********
2025-05-28 19:06:35.137987 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:06:35.138253 | orchestrator |
2025-05-28 19:06:35.138779 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-28 19:06:35.141992 | orchestrator | Wednesday 28 May 2025 19:06:35 +0000 (0:00:00.098) 0:00:03.268 *********
2025-05-28 19:06:35.761993 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:06:35.762706 | orchestrator |
2025-05-28 19:06:35.764099 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-28 19:06:35.764819 | orchestrator | Wednesday 28 May 2025 19:06:35 +0000 (0:00:00.622) 0:00:03.890 *********
2025-05-28 19:06:35.883791 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:06:35.883969 | orchestrator |
2025-05-28 19:06:35.884163 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-28 19:06:35.885478 | orchestrator |
2025-05-28 19:06:35.886513 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-28 19:06:35.887220 | orchestrator | Wednesday 28 May 2025 19:06:35 +0000 (0:00:00.121) 0:00:04.011 *********
2025-05-28 19:06:35.990788 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:06:35.991056 | orchestrator |
2025-05-28 19:06:35.991782 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-28 19:06:35.992022 | orchestrator | Wednesday 28 May 2025 19:06:35 +0000 (0:00:00.110) 0:00:04.121 *********
2025-05-28 19:06:36.611249 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:06:36.611842 | orchestrator |
2025-05-28 19:06:36.612761 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-28 19:06:36.613531 | orchestrator | Wednesday 28 May 2025 19:06:36 +0000 (0:00:00.617) 0:00:04.739 *********
2025-05-28 19:06:36.732381 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:06:36.732542 | orchestrator |
2025-05-28 19:06:36.732943 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-28 19:06:36.733231 | orchestrator |
2025-05-28 19:06:36.733508 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-28 19:06:36.733902 | orchestrator | Wednesday 28 May 2025 19:06:36 +0000 (0:00:00.120) 0:00:04.859 *********
2025-05-28 19:06:36.844815 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:06:36.845574 | orchestrator |
2025-05-28 19:06:36.846115 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-28 19:06:36.846664 | orchestrator | Wednesday 28 May 2025 19:06:36 +0000 (0:00:00.114) 0:00:04.974 *********
2025-05-28 19:06:37.478571 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:06:37.479123 | orchestrator |
2025-05-28 19:06:37.479685 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-28 19:06:37.480282 | orchestrator | Wednesday 28 May 2025 19:06:37 +0000 (0:00:00.632) 0:00:05.607 *********
2025-05-28 19:06:37.515149 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:06:37.515790 | orchestrator |
2025-05-28 19:06:37.516213 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:06:37.516543 | orchestrator | 2025-05-28 19:06:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 19:06:37.517069 | orchestrator | 2025-05-28 19:06:37 | INFO  | Please wait and do not abort execution.
2025-05-28 19:06:37.517387 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:37.517893 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:37.518447 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:37.518786 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:37.519174 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:37.519520 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:06:37.520005 | orchestrator |
2025-05-28 19:06:37.520471 | orchestrator | Wednesday 28 May 2025 19:06:37 +0000 (0:00:00.038) 0:00:05.646 *********
2025-05-28 19:06:37.521220 | orchestrator | ===============================================================================
2025-05-28 19:06:37.521927 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.18s
2025-05-28 19:06:37.522259 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s
2025-05-28 19:06:37.522771 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.64s
2025-05-28 19:06:38.076667 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-05-28 19:06:39.639998 | orchestrator | 2025-05-28 19:06:39 | INFO  | Task 48fdff97-a365-4eb5-8de7-f0089b0c71cf (wait-for-connection) was prepared for execution.
2025-05-28 19:06:39.640102 | orchestrator | 2025-05-28 19:06:39 | INFO  | It takes a moment until task 48fdff97-a365-4eb5-8de7-f0089b0c71cf (wait-for-connection) has been started and output is visible here.
2025-05-28 19:06:43.101060 | orchestrator |
2025-05-28 19:06:43.101249 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-05-28 19:06:43.102008 | orchestrator |
2025-05-28 19:06:43.103137 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-05-28 19:06:43.103277 | orchestrator | Wednesday 28 May 2025 19:06:43 +0000 (0:00:00.198) 0:00:00.198 *********
2025-05-28 19:06:54.636009 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:06:54.636102 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:06:54.636111 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:06:54.636122 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:06:54.636133 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:06:54.636144 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:06:54.636203 | orchestrator |
2025-05-28 19:06:54.636216 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:06:54.636359 | orchestrator | 2025-05-28 19:06:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 19:06:54.636378 | orchestrator | 2025-05-28 19:06:54 | INFO  | Please wait and do not abort execution.
2025-05-28 19:06:54.638364 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:06:54.639078 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:06:54.639907 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:06:54.640926 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:06:54.641754 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:06:54.642510 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:06:54.643317 | orchestrator |
2025-05-28 19:06:54.643812 | orchestrator | Wednesday 28 May 2025 19:06:54 +0000 (0:00:11.535) 0:00:11.733 *********
2025-05-28 19:06:54.644911 | orchestrator | ===============================================================================
2025-05-28 19:06:54.645248 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s
2025-05-28 19:06:55.267401 | orchestrator | + osism apply hddtemp
2025-05-28 19:06:56.776185 | orchestrator | 2025-05-28 19:06:56 | INFO  | Task 6e90dc58-6060-401b-8c80-0b0b2337c19e (hddtemp) was prepared for execution.
2025-05-28 19:06:56.776265 | orchestrator | 2025-05-28 19:06:56 | INFO  | It takes a moment until task 6e90dc58-6060-401b-8c80-0b0b2337c19e (hddtemp) has been started and output is visible here.
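The reboot step above is followed by `osism apply wait-for-connection`, which blocks until every rebooted node answers again; the task name suggests Ansible's `wait_for_connection` module underneath. A minimal shell sketch of the same idea — the SSH options, the 5-second poll interval, and the `wait_for_nodes` helper name are assumptions, not taken from the log:

```shell
# Hedged sketch: poll each node over SSH until it responds after a reboot.
# ConnectTimeout/BatchMode options and the 5s interval are assumptions.
wait_for_nodes() {
    for node in "$@"; do
        until ssh -o ConnectTimeout=5 -o BatchMode=yes "$node" true 2>/dev/null; do
            sleep 5   # node still rebooting; try again shortly
        done
        echo "$node is reachable"
    done
}
```

Unlike this sketch, Ansible's `wait_for_connection` module also enforces a configurable overall timeout instead of polling forever.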
2025-05-28 19:06:59.925188 | orchestrator |
2025-05-28 19:06:59.929596 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-05-28 19:06:59.929629 | orchestrator |
2025-05-28 19:06:59.929984 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-05-28 19:06:59.930331 | orchestrator | Wednesday 28 May 2025 19:06:59 +0000 (0:00:00.196) 0:00:00.196 *********
2025-05-28 19:07:00.091658 | orchestrator | ok: [testbed-manager]
2025-05-28 19:07:00.187370 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:07:00.271414 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:07:00.360468 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:07:00.442103 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:07:00.681315 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:07:00.682209 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:07:00.682569 | orchestrator |
2025-05-28 19:07:00.683677 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-05-28 19:07:00.684266 | orchestrator | Wednesday 28 May 2025 19:07:00 +0000 (0:00:00.755) 0:00:00.951 *********
2025-05-28 19:07:01.845437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:07:01.845568 | orchestrator |
2025-05-28 19:07:01.845641 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-05-28 19:07:01.846591 | orchestrator | Wednesday 28 May 2025 19:07:01 +0000 (0:00:01.163) 0:00:02.115 *********
2025-05-28 19:07:03.652150 | orchestrator | ok: [testbed-manager]
2025-05-28 19:07:03.652768 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:07:03.654427 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:07:03.655207 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:07:03.656198 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:07:03.657037 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:07:03.657418 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:07:03.658323 | orchestrator |
2025-05-28 19:07:03.659053 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-05-28 19:07:03.661469 | orchestrator | Wednesday 28 May 2025 19:07:03 +0000 (0:00:01.808) 0:00:03.923 *********
2025-05-28 19:07:04.189677 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:07:04.278931 | orchestrator | changed: [testbed-manager]
2025-05-28 19:07:04.753231 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:07:04.754430 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:07:04.755772 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:07:04.756733 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:07:04.757968 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:07:04.759711 | orchestrator |
2025-05-28 19:07:04.761238 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-05-28 19:07:04.761453 | orchestrator | Wednesday 28 May 2025 19:07:04 +0000 (0:00:01.098) 0:00:05.021 *********
2025-05-28 19:07:06.062889 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:07:06.063859 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:07:06.067241 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:07:06.067293 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:07:06.067300 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:07:06.068114 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:07:06.068998 | orchestrator | ok: [testbed-manager]
2025-05-28 19:07:06.069743 | orchestrator |
2025-05-28 19:07:06.070006 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-05-28 19:07:06.070524 | orchestrator | Wednesday 28 May 2025 19:07:06 +0000 (0:00:01.311) 0:00:06.332 *********
2025-05-28 19:07:06.327235 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:07:06.426747 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:07:06.515314 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:07:06.595479 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:07:06.719152 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:07:06.719305 | orchestrator | changed: [testbed-manager]
2025-05-28 19:07:06.720564 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:07:06.721256 | orchestrator |
2025-05-28 19:07:06.722214 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-05-28 19:07:06.722541 | orchestrator | Wednesday 28 May 2025 19:07:06 +0000 (0:00:00.660) 0:00:06.993 *********
2025-05-28 19:07:18.771204 | orchestrator | changed: [testbed-manager]
2025-05-28 19:07:18.771322 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:07:18.772099 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:07:18.772126 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:07:18.774382 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:07:18.776145 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:07:18.777368 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:07:18.778121 | orchestrator |
2025-05-28 19:07:18.779249 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-05-28 19:07:18.780250 | orchestrator | Wednesday 28 May 2025 19:07:18 +0000 (0:00:12.043) 0:00:19.036 *********
2025-05-28 19:07:20.161560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:07:20.163269 | orchestrator |
2025-05-28 19:07:20.163317 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-05-28 19:07:20.163333 | orchestrator | Wednesday 28 May 2025 19:07:20 +0000 (0:00:01.395) 0:00:20.432 *********
2025-05-28 19:07:22.103459 | orchestrator | changed: [testbed-manager]
2025-05-28 19:07:22.103576 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:07:22.103593 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:07:22.103679 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:07:22.104684 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:07:22.104735 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:07:22.105000 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:07:22.106074 | orchestrator |
2025-05-28 19:07:22.107397 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:07:22.107530 | orchestrator | 2025-05-28 19:07:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 19:07:22.107882 | orchestrator | 2025-05-28 19:07:22 | INFO  | Please wait and do not abort execution.
2025-05-28 19:07:22.108531 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:07:22.109630 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:22.109714 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:22.110192 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:22.110522 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:22.110884 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:22.111280 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:22.112009 | orchestrator |
2025-05-28 19:07:22.112150 | orchestrator | Wednesday 28 May 2025 19:07:22 +0000 (0:00:01.944) 0:00:22.377 *********
2025-05-28 19:07:22.112631 | orchestrator | ===============================================================================
2025-05-28 19:07:22.113306 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.04s
2025-05-28 19:07:22.113407 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.95s
2025-05-28 19:07:22.114128 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.81s
2025-05-28 19:07:22.115471 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.40s
2025-05-28 19:07:22.116169 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.31s
2025-05-28 19:07:22.116418 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.16s
2025-05-28 19:07:22.116450 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.10s
2025-05-28 19:07:22.116798 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.76s
2025-05-28 19:07:22.117198 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.66s
2025-05-28 19:07:22.792456 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-05-28 19:07:24.614011 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-28 19:07:24.614178 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-28 19:07:24.614213 | orchestrator | + local max_attempts=60
2025-05-28 19:07:24.614227 | orchestrator | + local name=ceph-ansible
2025-05-28 19:07:24.614238 | orchestrator | + local attempt_num=1
2025-05-28 19:07:24.614352 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-28 19:07:24.645290 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-28 19:07:24.645393 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-28 19:07:24.645409 | orchestrator | + local max_attempts=60
2025-05-28 19:07:24.645423 | orchestrator | + local name=kolla-ansible
2025-05-28 19:07:24.645434 | orchestrator | + local attempt_num=1
2025-05-28 19:07:24.645925 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-28 19:07:24.674301 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-28 19:07:24.674375 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-28 19:07:24.674385 | orchestrator | + local max_attempts=60
2025-05-28 19:07:24.674421 | orchestrator | + local name=osism-ansible
2025-05-28 19:07:24.674430 | orchestrator | + local attempt_num=1
2025-05-28 19:07:24.675349 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-28 19:07:24.710098 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-28 19:07:24.710166 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-28 19:07:24.710178 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-28 19:07:24.865179 | orchestrator | ARA in ceph-ansible already disabled.
2025-05-28 19:07:25.049471 | orchestrator | ARA in kolla-ansible already disabled.
2025-05-28 19:07:25.204002 | orchestrator | ARA in osism-ansible already disabled.
2025-05-28 19:07:25.380942 | orchestrator | ARA in osism-kubernetes already disabled.
2025-05-28 19:07:25.382144 | orchestrator | + osism apply gather-facts
2025-05-28 19:07:26.859489 | orchestrator | 2025-05-28 19:07:26 | INFO  | Task 60976978-76ae-4f5f-b48d-b8648ec7529d (gather-facts) was prepared for execution.
2025-05-28 19:07:26.859625 | orchestrator | 2025-05-28 19:07:26 | INFO  | It takes a moment until task 60976978-76ae-4f5f-b48d-b8648ec7529d (gather-facts) has been started and output is visible here.
2025-05-28 19:07:30.158802 | orchestrator |
2025-05-28 19:07:30.158975 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-28 19:07:30.159085 | orchestrator |
2025-05-28 19:07:30.159103 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-28 19:07:30.162118 | orchestrator | Wednesday 28 May 2025 19:07:30 +0000 (0:00:00.181) 0:00:00.181 *********
2025-05-28 19:07:35.172409 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:07:35.172593 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:07:35.172613 | orchestrator | ok: [testbed-manager]
2025-05-28 19:07:35.173086 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:07:35.173741 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:07:35.174097 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:07:35.174399 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:07:35.175563 | orchestrator |
2025-05-28 19:07:35.175588 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-28 19:07:35.175601 | orchestrator |
2025-05-28 19:07:35.179110 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-28 19:07:35.179139 | orchestrator | Wednesday 28 May 2025 19:07:35 +0000 (0:00:05.017) 0:00:05.198 *********
2025-05-28 19:07:35.337609 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:07:35.428449 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:07:35.529702 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:07:35.607075 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:07:35.688067 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:07:35.727739 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:07:35.729184 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:07:35.729502 | orchestrator |
2025-05-28 19:07:35.731420 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:07:35.731504 | orchestrator | 2025-05-28 19:07:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 19:07:35.731521 | orchestrator | 2025-05-28 19:07:35 | INFO  | Please wait and do not abort execution.
2025-05-28 19:07:35.732869 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:35.734091 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:35.734955 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:35.735439 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:35.735987 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:35.736643 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:35.737191 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:07:35.737698 | orchestrator |
2025-05-28 19:07:35.738875 | orchestrator | Wednesday 28 May 2025 19:07:35 +0000 (0:00:00.554) 0:00:05.753 *********
2025-05-28 19:07:35.738900 | orchestrator | ===============================================================================
2025-05-28 19:07:35.739713 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.02s
2025-05-28 19:07:35.740424 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2025-05-28 19:07:36.314562 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-05-28 19:07:36.331349 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-05-28 19:07:36.354819 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-05-28 19:07:36.379586 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-05-28 19:07:36.393976 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-05-28 19:07:36.413388 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-05-28 19:07:36.428529 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-05-28 19:07:36.448775 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-05-28 19:07:36.463888 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-05-28 19:07:36.485238 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-05-28 19:07:36.503417 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-05-28 19:07:36.522549 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-05-28 19:07:36.540028 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-05-28 19:07:36.561530 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-05-28 19:07:36.584215 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-05-28 19:07:36.606801 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-05-28 19:07:36.623317 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-05-28 19:07:36.639251 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-05-28 19:07:36.662272 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-05-28 19:07:36.678237 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-05-28 19:07:36.691479 | orchestrator | + [[ false == \t\r\u\e ]]
2025-05-28 19:07:36.803870 | orchestrator | ok: Runtime: 0:25:23.403902
2025-05-28 19:07:36.915121 |
2025-05-28 19:07:36.915283 | TASK [Deploy services]
2025-05-28 19:07:37.449021 | orchestrator | skipping: Conditional result was False
2025-05-28 19:07:37.468872 |
2025-05-28 19:07:37.469081 | TASK [Deploy in a nutshell]
2025-05-28 19:07:38.158072 | orchestrator | + set -e
2025-05-28 19:07:38.158295 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-28 19:07:38.158321 | orchestrator | ++ export INTERACTIVE=false
2025-05-28 19:07:38.158343 | orchestrator | ++ INTERACTIVE=false
2025-05-28 19:07:38.158357 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-28 19:07:38.158370 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-28 19:07:38.158384 | orchestrator | + source /opt/manager-vars.sh
2025-05-28 19:07:38.158431 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-28 19:07:38.158460 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-28 19:07:38.158476 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-28 19:07:38.158491 | orchestrator | ++ CEPH_VERSION=reef
2025-05-28 19:07:38.158503 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-28 19:07:38.158521 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-28 19:07:38.158533 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-28 19:07:38.158554 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-05-28 19:07:38.158579 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-28 19:07:38.158594 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-28 19:07:38.158606 | orchestrator | ++ export ARA=false
2025-05-28 19:07:38.158617 | orchestrator | ++ ARA=false
2025-05-28 19:07:38.158628 | orchestrator | ++ export TEMPEST=false
2025-05-28 19:07:38.158640 | orchestrator | ++ TEMPEST=false
2025-05-28 19:07:38.158651 | orchestrator | ++ export IS_ZUUL=true
2025-05-28 19:07:38.158661 | orchestrator | ++ IS_ZUUL=true
2025-05-28 19:07:38.158673 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.158
2025-05-28 19:07:38.158684 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.158
2025-05-28 19:07:38.158695 | orchestrator | ++ export EXTERNAL_API=false
2025-05-28 19:07:38.158706 | orchestrator | ++ EXTERNAL_API=false
2025-05-28 19:07:38.158717 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-28 19:07:38.158728 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-28 19:07:38.158739 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-28 19:07:38.158750 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-28 19:07:38.158761 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-28 19:07:38.158772 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-28 19:07:38.158783 | orchestrator | + echo
2025-05-28 19:07:38.158795 | orchestrator |
2025-05-28 19:07:38.158806 | orchestrator | # PULL IMAGES
2025-05-28 19:07:38.158818 | orchestrator |
2025-05-28 19:07:38.158828 | orchestrator | + echo '# PULL IMAGES'
2025-05-28 19:07:38.158897 | orchestrator | + echo
2025-05-28 19:07:38.159656 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-28 19:07:38.219958 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-28 19:07:38.220064 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-05-28 19:07:39.713559 | orchestrator | 2025-05-28 19:07:39 | INFO  | Trying to run play pull-images in environment custom
2025-05-28 19:07:39.764279 | orchestrator | 2025-05-28 19:07:39 | INFO  | Task 43bc0e69-83be-479c-8142-eb1dcd04a010 (pull-images) was prepared for execution.
2025-05-28 19:07:39.764387 | orchestrator | 2025-05-28 19:07:39 | INFO  | It takes a moment until task 43bc0e69-83be-479c-8142-eb1dcd04a010 (pull-images) has been started and output is visible here.
2025-05-28 19:07:42.997061 | orchestrator |
2025-05-28 19:07:43.000036 | orchestrator | PLAY [Pull images] *************************************************************
2025-05-28 19:07:43.000094 | orchestrator |
2025-05-28 19:07:43.000116 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-05-28 19:07:43.000671 | orchestrator | Wednesday 28 May 2025 19:07:42 +0000 (0:00:00.152) 0:00:00.152 *********
2025-05-28 19:08:19.030154 | orchestrator | changed: [testbed-manager]
2025-05-28 19:08:19.030297 | orchestrator |
2025-05-28 19:08:19.030332 | orchestrator | TASK [Pull other images] *******************************************************
2025-05-28 19:08:19.031063 | orchestrator | Wednesday 28 May 2025 19:08:19 +0000 (0:00:36.033) 0:00:36.186 *********
2025-05-28 19:09:11.002339 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-05-28 19:09:11.002469 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-05-28 19:09:11.002485 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-05-28 19:09:11.002496 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-05-28 19:09:11.002506 | orchestrator | changed: [testbed-manager] => (item=common)
2025-05-28 19:09:11.002886 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-05-28 19:09:11.003127 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-05-28 19:09:11.004307 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-05-28 19:09:11.007427 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-05-28 19:09:11.009274 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-05-28 19:09:11.010459 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-05-28 19:09:11.011880 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-05-28 19:09:11.012938 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-05-28 19:09:11.013925 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-05-28 19:09:11.015118 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-05-28 19:09:11.015677 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-05-28 19:09:11.016567 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-05-28 19:09:11.017407 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-05-28 19:09:11.017957 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-05-28 19:09:11.018763 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-05-28 19:09:11.019355 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-05-28 19:09:11.019907 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-05-28 19:09:11.020638 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-05-28 19:09:11.021585 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-05-28 19:09:11.022071 | orchestrator |
2025-05-28 19:09:11.022973 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:09:11.023303 | orchestrator | 2025-05-28 19:09:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 19:09:11.023419 | orchestrator | 2025-05-28 19:09:11 | INFO  | Please wait and do not abort execution.
2025-05-28 19:09:11.024926 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:09:11.025293 | orchestrator |
2025-05-28 19:09:11.025923 | orchestrator | Wednesday 28 May 2025 19:09:10 +0000 (0:00:51.971) 0:01:28.157 *********
2025-05-28 19:09:11.026917 | orchestrator | ===============================================================================
2025-05-28 19:09:11.027431 | orchestrator | Pull other images ------------------------------------------------------ 51.97s
2025-05-28 19:09:11.028251 | orchestrator | Pull keystone image ---------------------------------------------------- 36.03s
2025-05-28 19:09:13.236495 | orchestrator | 2025-05-28 19:09:13 | INFO  | Trying to run play wipe-partitions in environment custom
2025-05-28 19:09:13.287585 | orchestrator | 2025-05-28 19:09:13 | INFO  | Task cbc2eed1-d616-4f51-b0ec-e696be9a5462 (wipe-partitions) was prepared for execution.
2025-05-28 19:09:13.287722 | orchestrator | 2025-05-28 19:09:13 | INFO  | It takes a moment until task cbc2eed1-d616-4f51-b0ec-e696be9a5462 (wipe-partitions) has been started and output is visible here.
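The "Pull other images" task above loops over one item per OpenStack service and pulls the matching container image on testbed-manager. A hedged sketch of that loop; the registry, tag format, the `pull_other_images` helper name, and the use of `docker pull` are assumptions (the actual play may use a different client or image naming):

```shell
# Hedged sketch of the "Pull other images" loop. The item list is taken
# from the log; registry/tag handling is an assumption for illustration.
pull_other_images() {
    registry="$1"   # e.g. a local or upstream kolla registry (assumption)
    tag="$2"        # e.g. the OpenStack release tag (assumption)
    for image in aodh barbican ceilometer cinder common designate glance grafana \
                 horizon ironic loadbalancer magnum mariadb memcached neutron nova \
                 octavia opensearch openvswitch ovn placement rabbitmq redis skyline; do
        docker pull "$registry/$image:$tag"
    done
}
```

The preceding `osism apply -r 2 -e custom pull-images` call runs the corresponding play with up to two retries in the custom environment, so transient pull failures are retried at the play level rather than per image.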
2025-05-28 19:09:16.531672 | orchestrator |
2025-05-28 19:09:16.532449 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-05-28 19:09:16.537309 | orchestrator |
2025-05-28 19:09:16.537352 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-05-28 19:09:16.538084 | orchestrator | Wednesday 28 May 2025 19:09:16 +0000 (0:00:00.141) 0:00:00.141 *********
2025-05-28 19:09:17.112757 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:09:17.112909 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:09:17.113271 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:09:17.115571 | orchestrator |
2025-05-28 19:09:17.115763 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-05-28 19:09:17.116178 | orchestrator | Wednesday 28 May 2025 19:09:17 +0000 (0:00:00.586) 0:00:00.728 *********
2025-05-28 19:09:17.285180 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:17.382890 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:09:17.382970 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:09:17.383286 | orchestrator |
2025-05-28 19:09:17.383554 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-05-28 19:09:17.384089 | orchestrator | Wednesday 28 May 2025 19:09:17 +0000 (0:00:00.270) 0:00:00.998 *********
2025-05-28 19:09:18.210183 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:09:18.210457 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:09:18.210481 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:09:18.210827 | orchestrator |
2025-05-28 19:09:18.211280 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-05-28 19:09:18.211714 | orchestrator | Wednesday 28 May 2025 19:09:18 +0000 (0:00:00.825) 0:00:01.823 *********
2025-05-28 19:09:18.409371 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:18.512934 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:09:18.513020 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:09:18.513094 | orchestrator |
2025-05-28 19:09:18.513365 | orchestrator | TASK [Check device availability] ***********************************************
2025-05-28 19:09:18.513573 | orchestrator | Wednesday 28 May 2025 19:09:18 +0000 (0:00:00.305) 0:00:02.129 *********
2025-05-28 19:09:19.725129 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-28 19:09:19.725917 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-28 19:09:19.726167 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-28 19:09:19.726344 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-28 19:09:19.726602 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-28 19:09:19.726996 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-28 19:09:19.727312 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-28 19:09:19.727862 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-28 19:09:19.728192 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-28 19:09:19.728434 | orchestrator |
2025-05-28 19:09:19.728793 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-05-28 19:09:19.729187 | orchestrator | Wednesday 28 May 2025 19:09:19 +0000 (0:00:01.212) 0:00:03.341 *********
2025-05-28 19:09:21.199664 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-05-28 19:09:21.199865 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-05-28 19:09:21.199989 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-05-28 19:09:21.200318 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-05-28 19:09:21.203198 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-05-28 19:09:21.204281 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-05-28 19:09:21.204322 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-05-28 19:09:21.204338 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-05-28 19:09:21.204353 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-05-28 19:09:21.204369 | orchestrator |
2025-05-28 19:09:21.204385 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-05-28 19:09:21.204457 | orchestrator | Wednesday 28 May 2025 19:09:21 +0000 (0:00:01.472) 0:00:04.814 *********
2025-05-28 19:09:23.613009 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-28 19:09:23.613533 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-28 19:09:23.614314 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-28 19:09:23.615265 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-28 19:09:23.615520 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-28 19:09:23.618306 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-28 19:09:23.618331 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-28 19:09:23.618342 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-28 19:09:23.618354 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-28 19:09:23.618545 | orchestrator |
2025-05-28 19:09:23.618796 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-05-28 19:09:23.619154 | orchestrator | Wednesday 28 May 2025 19:09:23 +0000 (0:00:02.412) 0:00:07.227 *********
2025-05-28 19:09:24.231219 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:09:24.231310 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:09:24.232083 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:09:24.233513 | orchestrator |
2025-05-28 19:09:24.234696 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-05-28 19:09:24.235193 | orchestrator | Wednesday 28 May 2025 19:09:24 +0000 (0:00:00.617) 0:00:07.844 *********
2025-05-28 19:09:24.886455 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:09:24.887194 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:09:24.887791 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:09:24.889973 | orchestrator |
2025-05-28 19:09:24.891425 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:09:24.891478 | orchestrator | 2025-05-28 19:09:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 19:09:24.891492 | orchestrator | 2025-05-28 19:09:24 | INFO  | Please wait and do not abort execution.
2025-05-28 19:09:24.893027 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:09:24.893883 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:09:24.894936 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:09:24.895935 | orchestrator |
2025-05-28 19:09:24.896913 | orchestrator | Wednesday 28 May 2025 19:09:24 +0000 (0:00:00.655) 0:00:08.500 *********
2025-05-28 19:09:24.897368 | orchestrator | ===============================================================================
2025-05-28 19:09:24.898099 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.41s
2025-05-28 19:09:24.899465 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.47s
2025-05-28 19:09:24.903008 | orchestrator | Check device availability ----------------------------------------------- 1.21s
2025-05-28 19:09:24.904223 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.83s
2025-05-28
19:09:24.905100 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s 2025-05-28 19:09:24.905611 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2025-05-28 19:09:24.906635 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-05-28 19:09:24.907188 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.31s 2025-05-28 19:09:24.907535 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-05-28 19:09:26.996289 | orchestrator | 2025-05-28 19:09:26 | INFO  | Task b70d7dac-aa6f-4c7d-a830-6558bb01a0fb (facts) was prepared for execution. 2025-05-28 19:09:26.996397 | orchestrator | 2025-05-28 19:09:26 | INFO  | It takes a moment until task b70d7dac-aa6f-4c7d-a830-6558bb01a0fb (facts) has been started and output is visible here. 2025-05-28 19:09:31.293623 | orchestrator | 2025-05-28 19:09:31.296244 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-28 19:09:31.296287 | orchestrator | 2025-05-28 19:09:31.296676 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-28 19:09:31.297065 | orchestrator | Wednesday 28 May 2025 19:09:31 +0000 (0:00:00.249) 0:00:00.249 ********* 2025-05-28 19:09:32.562193 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:09:32.563168 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:09:32.564049 | orchestrator | ok: [testbed-manager] 2025-05-28 19:09:32.565194 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:09:32.566458 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:09:32.567898 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:09:32.568355 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:09:32.569958 | orchestrator | 2025-05-28 19:09:32.570370 | orchestrator | TASK [osism.commons.facts : Copy fact files] 
*********************************** 2025-05-28 19:09:32.571342 | orchestrator | Wednesday 28 May 2025 19:09:32 +0000 (0:00:01.268) 0:00:01.517 ********* 2025-05-28 19:09:32.733336 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:09:32.817732 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:09:32.901449 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:09:32.983617 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:09:33.074371 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:09:33.819278 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:09:33.821490 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:09:33.822789 | orchestrator | 2025-05-28 19:09:33.824616 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-28 19:09:33.826104 | orchestrator | 2025-05-28 19:09:33.827440 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-28 19:09:33.830331 | orchestrator | Wednesday 28 May 2025 19:09:33 +0000 (0:00:01.261) 0:00:02.779 ********* 2025-05-28 19:09:38.598600 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:09:38.598694 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:09:38.598709 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:09:38.599004 | orchestrator | ok: [testbed-manager] 2025-05-28 19:09:38.599575 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:09:38.605539 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:09:38.606080 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:09:38.606415 | orchestrator | 2025-05-28 19:09:38.606748 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-28 19:09:38.607867 | orchestrator | 2025-05-28 19:09:38.609358 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-28 19:09:38.609495 | orchestrator | Wednesday 28 May 2025 19:09:38 +0000 (0:00:04.776) 
0:00:07.556 ********* 2025-05-28 19:09:39.052827 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:09:39.179344 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:09:39.284492 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:09:39.374073 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:09:39.457014 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:09:39.508664 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:09:39.509278 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:09:39.509822 | orchestrator | 2025-05-28 19:09:39.510511 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:09:39.510864 | orchestrator | 2025-05-28 19:09:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 19:09:39.511374 | orchestrator | 2025-05-28 19:09:39 | INFO  | Please wait and do not abort execution. 2025-05-28 19:09:39.512362 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 19:09:39.512988 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 19:09:39.513484 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 19:09:39.514974 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 19:09:39.515161 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 19:09:39.515623 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 19:09:39.516252 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 19:09:39.516890 | orchestrator | 2025-05-28 19:09:39.519517 | orchestrator | Wednesday 28 May 2025 19:09:39 
+0000 (0:00:00.914) 0:00:08.470 ********* 2025-05-28 19:09:39.519748 | orchestrator | =============================================================================== 2025-05-28 19:09:39.520625 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.78s 2025-05-28 19:09:39.520669 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.27s 2025-05-28 19:09:39.520689 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s 2025-05-28 19:09:39.520707 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.91s 2025-05-28 19:09:41.748854 | orchestrator | 2025-05-28 19:09:41 | INFO  | Task da2f488d-29c5-47b2-93ac-13c76a5ca4d4 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-28 19:09:41.748948 | orchestrator | 2025-05-28 19:09:41 | INFO  | It takes a moment until task da2f488d-29c5-47b2-93ac-13c76a5ca4d4 (ceph-configure-lvm-volumes) has been started and output is visible here. 
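The disk-reset play above runs four steps per node: wipe filesystem/partition signatures, zero the first 32M of each device, reload udev rules, and ask the kernel to re-emit device events. A minimal sketch of the presumably equivalent commands, assembled as argument lists without executing anything; the exact flags are assumptions, since the playbook's own command lines are not shown in the log:

```python
# Dry-run sketch of the wipe sequence named by the tasks above.
# Flags are assumed (the log only shows task names); nothing is executed.
devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]

def wipe_commands(dev: str) -> list[list[str]]:
    return [
        # TASK [Wipe partitions with wipefs]
        ["wipefs", "--all", dev],
        # TASK [Overwrite first 32M with zeros]
        ["dd", "if=/dev/zero", f"of={dev}", "bs=1M", "count=32"],
    ]

cmds = [cmd for dev in devices for cmd in wipe_commands(dev)]
# TASK [Reload udev rules] and TASK [Request device events from the kernel]
cmds.append(["udevadm", "control", "--reload-rules"])
cmds.append(["udevadm", "trigger"])

for cmd in cmds:
    print(" ".join(cmd))
```

Running the commands for real (e.g. via `subprocess.run`) is destructive, which is why the OSDs are wiped only on the storage nodes (testbed-node-3/4/5) and only on the non-root devices sdb–sdd.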
2025-05-28 19:09:45.128104 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-28 19:09:45.760671 | orchestrator |
2025-05-28 19:09:45.763200 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-28 19:09:45.763359 | orchestrator |
2025-05-28 19:09:45.764559 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-28 19:09:45.764949 | orchestrator | Wednesday 28 May 2025 19:09:45 +0000 (0:00:00.529) 0:00:00.529 *********
2025-05-28 19:09:46.034516 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-28 19:09:46.034648 | orchestrator |
2025-05-28 19:09:46.034665 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-28 19:09:46.037516 | orchestrator | Wednesday 28 May 2025 19:09:46 +0000 (0:00:00.274) 0:00:00.803 *********
2025-05-28 19:09:46.293642 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:09:46.293763 | orchestrator |
2025-05-28 19:09:46.295733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:46.295769 | orchestrator | Wednesday 28 May 2025 19:09:46 +0000 (0:00:00.259) 0:00:01.063 *********
2025-05-28 19:09:46.849283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-28 19:09:46.851842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-28 19:09:46.854631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-28 19:09:46.856238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-28 19:09:46.856277 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-28 19:09:46.857114 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-28 19:09:46.861009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-28 19:09:46.861104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-28 19:09:46.861119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-28 19:09:46.861132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-28 19:09:46.861144 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-28 19:09:46.861155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-28 19:09:46.861166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-28 19:09:46.861178 | orchestrator |
2025-05-28 19:09:46.861897 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:46.861922 | orchestrator | Wednesday 28 May 2025 19:09:46 +0000 (0:00:00.555) 0:00:01.619 *********
2025-05-28 19:09:47.054269 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:47.056109 | orchestrator |
2025-05-28 19:09:47.058469 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:47.059999 | orchestrator | Wednesday 28 May 2025 19:09:47 +0000 (0:00:00.206) 0:00:01.825 *********
2025-05-28 19:09:47.287745 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:47.288504 | orchestrator |
2025-05-28 19:09:47.288562 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:47.292517 | orchestrator | Wednesday 28 May 2025 19:09:47 +0000 (0:00:00.233) 0:00:02.058 *********
2025-05-28 19:09:47.552683 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:47.552777 | orchestrator |
2025-05-28 19:09:47.552790 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:47.553500 | orchestrator | Wednesday 28 May 2025 19:09:47 +0000 (0:00:00.262) 0:00:02.321 *********
2025-05-28 19:09:47.777374 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:47.778603 | orchestrator |
2025-05-28 19:09:47.780194 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:47.780404 | orchestrator | Wednesday 28 May 2025 19:09:47 +0000 (0:00:00.229) 0:00:02.550 *********
2025-05-28 19:09:48.012410 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:48.014083 | orchestrator |
2025-05-28 19:09:48.014149 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:48.014191 | orchestrator | Wednesday 28 May 2025 19:09:48 +0000 (0:00:00.231) 0:00:02.781 *********
2025-05-28 19:09:48.233873 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:48.234102 | orchestrator |
2025-05-28 19:09:48.234122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:48.234202 | orchestrator | Wednesday 28 May 2025 19:09:48 +0000 (0:00:00.222) 0:00:03.004 *********
2025-05-28 19:09:48.450779 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:48.451392 | orchestrator |
2025-05-28 19:09:48.452116 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:48.453126 | orchestrator | Wednesday 28 May 2025 19:09:48 +0000 (0:00:00.218) 0:00:03.223 *********
2025-05-28 19:09:48.744142 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:48.747336 | orchestrator |
2025-05-28 19:09:48.748496 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:48.748522 | orchestrator | Wednesday 28 May 2025 19:09:48 +0000 (0:00:00.287) 0:00:03.511 *********
2025-05-28 19:09:49.553456 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e)
2025-05-28 19:09:49.555000 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e)
2025-05-28 19:09:49.555040 | orchestrator |
2025-05-28 19:09:49.555945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:49.557306 | orchestrator | Wednesday 28 May 2025 19:09:49 +0000 (0:00:00.815) 0:00:04.326 *********
2025-05-28 19:09:50.504457 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_49a2ee15-28bf-4b5f-b85e-3182eb91d801)
2025-05-28 19:09:50.505393 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_49a2ee15-28bf-4b5f-b85e-3182eb91d801)
2025-05-28 19:09:50.507360 | orchestrator |
2025-05-28 19:09:50.509872 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:50.511356 | orchestrator | Wednesday 28 May 2025 19:09:50 +0000 (0:00:00.948) 0:00:05.274 *********
2025-05-28 19:09:50.956333 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1334c062-0c98-48ca-b2e9-c7f7d80524d4)
2025-05-28 19:09:50.956921 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1334c062-0c98-48ca-b2e9-c7f7d80524d4)
2025-05-28 19:09:50.957924 | orchestrator |
2025-05-28 19:09:50.959082 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:50.959280 | orchestrator | Wednesday 28 May 2025 19:09:50 +0000 (0:00:00.452) 0:00:05.727 *********
2025-05-28 19:09:51.662425 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_384074e9-09a1-4592-86bd-93fc7dbc72b1)
2025-05-28 19:09:51.665740 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_384074e9-09a1-4592-86bd-93fc7dbc72b1)
2025-05-28 19:09:51.665773 | orchestrator |
2025-05-28 19:09:51.666584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:09:51.667425 | orchestrator | Wednesday 28 May 2025 19:09:51 +0000 (0:00:00.704) 0:00:06.432 *********
2025-05-28 19:09:52.034364 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-28 19:09:52.034470 | orchestrator |
2025-05-28 19:09:52.034763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:52.035385 | orchestrator | Wednesday 28 May 2025 19:09:52 +0000 (0:00:00.374) 0:00:06.806 *********
2025-05-28 19:09:52.550667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-28 19:09:52.551338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-28 19:09:52.554246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-28 19:09:52.555472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-28 19:09:52.556042 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-28 19:09:52.557288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-28 19:09:52.557848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-28 19:09:52.558239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-28 19:09:52.558866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-28 19:09:52.559698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-28 19:09:52.559721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-28 19:09:52.561633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-28 19:09:52.561698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-28 19:09:52.561714 | orchestrator |
2025-05-28 19:09:52.561727 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:52.561738 | orchestrator | Wednesday 28 May 2025 19:09:52 +0000 (0:00:00.517) 0:00:07.324 *********
2025-05-28 19:09:52.785440 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:52.786120 | orchestrator |
2025-05-28 19:09:52.786537 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:52.787177 | orchestrator | Wednesday 28 May 2025 19:09:52 +0000 (0:00:00.233) 0:00:07.557 *********
2025-05-28 19:09:53.001123 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:53.002449 | orchestrator |
2025-05-28 19:09:53.002529 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:53.002545 | orchestrator | Wednesday 28 May 2025 19:09:52 +0000 (0:00:00.215) 0:00:07.773 *********
2025-05-28 19:09:53.228278 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:53.228401 | orchestrator |
2025-05-28 19:09:53.228419 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:53.228442 | orchestrator | Wednesday 28 May 2025 19:09:53 +0000 (0:00:00.223) 0:00:07.997 *********
2025-05-28 19:09:53.545059 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:53.545200 | orchestrator |
2025-05-28 19:09:53.545695 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:53.546182 | orchestrator | Wednesday 28 May 2025 19:09:53 +0000 (0:00:00.319) 0:00:08.316 *********
2025-05-28 19:09:54.272236 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:54.275978 | orchestrator |
2025-05-28 19:09:54.276310 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:54.277400 | orchestrator | Wednesday 28 May 2025 19:09:54 +0000 (0:00:00.728) 0:00:09.045 *********
2025-05-28 19:09:54.519784 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:54.520079 | orchestrator |
2025-05-28 19:09:54.521171 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:54.521613 | orchestrator | Wednesday 28 May 2025 19:09:54 +0000 (0:00:00.247) 0:00:09.292 *********
2025-05-28 19:09:54.795632 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:54.795712 | orchestrator |
2025-05-28 19:09:54.795724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:54.795736 | orchestrator | Wednesday 28 May 2025 19:09:54 +0000 (0:00:00.273) 0:00:09.566 *********
2025-05-28 19:09:55.049908 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:55.050223 | orchestrator |
2025-05-28 19:09:55.050246 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:55.050259 | orchestrator | Wednesday 28 May 2025 19:09:55 +0000 (0:00:00.249) 0:00:09.816 *********
2025-05-28 19:09:55.771550 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-28 19:09:55.771647 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-28 19:09:55.771664 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-28 19:09:55.772963 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-28 19:09:55.778063 | orchestrator |
2025-05-28 19:09:55.778099 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:55.778112 | orchestrator | Wednesday 28 May 2025 19:09:55 +0000 (0:00:00.725) 0:00:10.542 *********
2025-05-28 19:09:56.029562 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:56.030153 | orchestrator |
2025-05-28 19:09:56.031285 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:56.034270 | orchestrator | Wednesday 28 May 2025 19:09:56 +0000 (0:00:00.259) 0:00:10.802 *********
2025-05-28 19:09:56.304304 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:56.305587 | orchestrator |
2025-05-28 19:09:56.306067 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:56.306438 | orchestrator | Wednesday 28 May 2025 19:09:56 +0000 (0:00:00.275) 0:00:11.077 *********
2025-05-28 19:09:56.563189 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:56.563294 | orchestrator |
2025-05-28 19:09:56.563309 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:09:56.563323 | orchestrator | Wednesday 28 May 2025 19:09:56 +0000 (0:00:00.257) 0:00:11.334 *********
2025-05-28 19:09:56.793636 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:56.793748 | orchestrator |
2025-05-28 19:09:56.793765 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-28 19:09:56.793779 | orchestrator | Wednesday 28 May 2025 19:09:56 +0000 (0:00:00.218) 0:00:11.553 *********
2025-05-28 19:09:57.048973 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-28 19:09:57.052046 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-05-28 19:09:57.054789 | orchestrator |
2025-05-28 19:09:57.058536 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-28 19:09:57.061488 | orchestrator | Wednesday 28 May 2025 19:09:57 +0000 (0:00:00.268) 0:00:11.821 *********
2025-05-28 19:09:57.197284 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:57.198304 | orchestrator |
2025-05-28 19:09:57.198340 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-28 19:09:57.198707 | orchestrator | Wednesday 28 May 2025 19:09:57 +0000 (0:00:00.147) 0:00:11.969 *********
2025-05-28 19:09:57.621316 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:57.622900 | orchestrator |
2025-05-28 19:09:57.623363 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-28 19:09:57.623856 | orchestrator | Wednesday 28 May 2025 19:09:57 +0000 (0:00:00.423) 0:00:12.392 *********
2025-05-28 19:09:57.757415 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:57.757985 | orchestrator |
2025-05-28 19:09:57.758925 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-28 19:09:57.759520 | orchestrator | Wednesday 28 May 2025 19:09:57 +0000 (0:00:00.137) 0:00:12.530 *********
2025-05-28 19:09:57.917215 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:09:57.918320 | orchestrator |
2025-05-28 19:09:57.919956 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-28 19:09:57.921012 | orchestrator | Wednesday 28 May 2025 19:09:57 +0000 (0:00:00.158) 0:00:12.689 *********
2025-05-28 19:09:58.157230 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '79c077cd-dd98-5cad-a8fa-86d8aa897eb3'}})
2025-05-28 19:09:58.157857 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '117a45ef-4e6c-5b76-bea4-f0c196d92690'}})
2025-05-28 19:09:58.159045 | orchestrator |
2025-05-28 19:09:58.160198 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-28 19:09:58.161324 | orchestrator | Wednesday 28 May 2025 19:09:58 +0000 (0:00:00.238) 0:00:12.927 *********
2025-05-28 19:09:58.351887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '79c077cd-dd98-5cad-a8fa-86d8aa897eb3'}})
2025-05-28 19:09:58.353941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '117a45ef-4e6c-5b76-bea4-f0c196d92690'}})
2025-05-28 19:09:58.356845 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:58.357418 | orchestrator |
2025-05-28 19:09:58.358105 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-28 19:09:58.359133 | orchestrator | Wednesday 28 May 2025 19:09:58 +0000 (0:00:00.195) 0:00:13.122 *********
2025-05-28 19:09:58.562658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '79c077cd-dd98-5cad-a8fa-86d8aa897eb3'}})
2025-05-28 19:09:58.563965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '117a45ef-4e6c-5b76-bea4-f0c196d92690'}})
2025-05-28 19:09:58.564003 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:58.564827 | orchestrator |
2025-05-28 19:09:58.565341 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-28 19:09:58.566311 | orchestrator | Wednesday 28 May 2025 19:09:58 +0000 (0:00:00.212) 0:00:13.334 *********
2025-05-28 19:09:58.739152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '79c077cd-dd98-5cad-a8fa-86d8aa897eb3'}})
2025-05-28 19:09:58.740140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '117a45ef-4e6c-5b76-bea4-f0c196d92690'}})
2025-05-28 19:09:58.741581 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:58.742066 | orchestrator |
2025-05-28 19:09:58.743415 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-28 19:09:58.744764 | orchestrator | Wednesday 28 May 2025 19:09:58 +0000 (0:00:00.175) 0:00:13.510 *********
2025-05-28 19:09:58.883886 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:09:58.884786 | orchestrator |
2025-05-28 19:09:58.885308 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-28 19:09:58.886269 | orchestrator | Wednesday 28 May 2025 19:09:58 +0000 (0:00:00.146) 0:00:13.657 *********
2025-05-28 19:09:59.060458 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:09:59.064412 | orchestrator |
2025-05-28 19:09:59.064438 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-28 19:09:59.064449 | orchestrator | Wednesday 28 May 2025 19:09:59 +0000 (0:00:00.171) 0:00:13.828 *********
2025-05-28 19:09:59.232700 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:59.233445 | orchestrator |
2025-05-28 19:09:59.235050 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-28 19:09:59.236299 | orchestrator | Wednesday 28 May 2025 19:09:59 +0000 (0:00:00.175) 0:00:14.004 *********
2025-05-28 19:09:59.385318 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:59.385890 | orchestrator |
2025-05-28 19:09:59.389895 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-28 19:09:59.389996 | orchestrator | Wednesday 28 May 2025 19:09:59 +0000 (0:00:00.151) 0:00:14.155 *********
2025-05-28 19:09:59.751553 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:09:59.753760 | orchestrator |
2025-05-28 19:09:59.754076 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-28 19:09:59.755217 | orchestrator | Wednesday 28 May 2025 19:09:59 +0000 (0:00:00.367) 0:00:14.522 *********
2025-05-28 19:09:59.906611 | orchestrator | ok: [testbed-node-3] => {
2025-05-28 19:09:59.907154 | orchestrator |     "ceph_osd_devices": {
2025-05-28 19:09:59.908193 | orchestrator |         "sdb": {
2025-05-28 19:09:59.913675 | orchestrator |             "osd_lvm_uuid": "79c077cd-dd98-5cad-a8fa-86d8aa897eb3"
2025-05-28 19:09:59.914994 | orchestrator |         },
2025-05-28 19:09:59.915322 | orchestrator |         "sdc": {
2025-05-28 19:09:59.918614 | orchestrator |             "osd_lvm_uuid": "117a45ef-4e6c-5b76-bea4-f0c196d92690"
2025-05-28 19:09:59.918741 | orchestrator |         }
2025-05-28 19:09:59.919909 | orchestrator |     }
2025-05-28 19:09:59.923647 | orchestrator | }
2025-05-28 19:09:59.924453 | orchestrator |
2025-05-28 19:09:59.925154 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-28 19:09:59.925985 | orchestrator | Wednesday 28 May 2025 19:09:59 +0000 (0:00:00.155) 0:00:14.677 *********
2025-05-28 19:10:00.057430 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:10:00.058199 | orchestrator |
2025-05-28 19:10:00.059218 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-28 19:10:00.060338 | orchestrator | Wednesday 28 May 2025 19:10:00 +0000 (0:00:00.151) 0:00:14.829 *********
2025-05-28 19:10:00.194601 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:10:00.195303 | orchestrator |
2025-05-28 19:10:00.196385 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-28 19:10:00.198418 | orchestrator | Wednesday 28 May 2025 19:10:00 +0000 (0:00:00.137) 0:00:14.966 *********
2025-05-28 19:10:00.344641 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:10:00.345375 | orchestrator |
2025-05-28 19:10:00.346155 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-28 19:10:00.348423 | orchestrator | Wednesday 28 May 2025 19:10:00 +0000 (0:00:00.150) 0:00:15.117 *********
2025-05-28 19:10:00.650185 | orchestrator | changed: [testbed-node-3] => {
2025-05-28 19:10:00.650583 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-28 19:10:00.650612 | orchestrator |         "ceph_osd_devices": {
2025-05-28 19:10:00.650625 | orchestrator |             "sdb": {
2025-05-28 19:10:00.651927 | orchestrator |                 "osd_lvm_uuid": "79c077cd-dd98-5cad-a8fa-86d8aa897eb3"
2025-05-28 19:10:00.651954 | orchestrator |             },
2025-05-28 19:10:00.651966 | orchestrator |             "sdc": {
2025-05-28 19:10:00.651978 | orchestrator |                 "osd_lvm_uuid": "117a45ef-4e6c-5b76-bea4-f0c196d92690"
2025-05-28 19:10:00.652354 | orchestrator |             }
2025-05-28 19:10:00.652837 | orchestrator |         },
2025-05-28 19:10:00.653544 | orchestrator |         "lvm_volumes": [
2025-05-28 19:10:00.653847 | orchestrator |             {
2025-05-28 19:10:00.654137 | orchestrator |                 "data": "osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3",
2025-05-28 19:10:00.655464 | orchestrator |                 "data_vg": "ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3"
2025-05-28 19:10:00.655683 | orchestrator |             },
2025-05-28 19:10:00.657001 | orchestrator |             {
2025-05-28 19:10:00.657120 | orchestrator |                 "data": "osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690",
2025-05-28 19:10:00.658829 | orchestrator |                 "data_vg": "ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690"
2025-05-28 19:10:00.661196 | orchestrator |             }
2025-05-28 19:10:00.661217 | orchestrator |         ]
2025-05-28 19:10:00.661229 | orchestrator |     }
2025-05-28 19:10:00.661241 | orchestrator | }
2025-05-28 19:10:00.661253 | orchestrator |
2025-05-28 19:10:00.661266 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-28 19:10:00.661305 | orchestrator | Wednesday 28 May 2025 19:10:00 +0000 (0:00:00.296) 0:00:15.413 *********
2025-05-28 19:10:02.883534 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-28 19:10:02.883644 | orchestrator |
2025-05-28 19:10:02.883653 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2025-05-28 19:10:02.883658 | orchestrator | 2025-05-28 19:10:02.883693 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-28 19:10:02.883921 | orchestrator | Wednesday 28 May 2025 19:10:02 +0000 (0:00:02.240) 0:00:17.654 ********* 2025-05-28 19:10:03.145576 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-28 19:10:03.146954 | orchestrator | 2025-05-28 19:10:03.148041 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-28 19:10:03.148353 | orchestrator | Wednesday 28 May 2025 19:10:03 +0000 (0:00:00.264) 0:00:17.918 ********* 2025-05-28 19:10:03.406262 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:10:03.406792 | orchestrator | 2025-05-28 19:10:03.407251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:03.408095 | orchestrator | Wednesday 28 May 2025 19:10:03 +0000 (0:00:00.259) 0:00:18.178 ********* 2025-05-28 19:10:03.793659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-28 19:10:03.796059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-28 19:10:03.796447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-28 19:10:03.797189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-28 19:10:03.798517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-28 19:10:03.799646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-28 19:10:03.800942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-28 19:10:03.802350 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-28 19:10:03.803419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-28 19:10:03.804200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-28 19:10:03.804535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-28 19:10:03.804979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-28 19:10:03.805784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-28 19:10:03.806301 | orchestrator | 2025-05-28 19:10:03.806434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:03.806949 | orchestrator | Wednesday 28 May 2025 19:10:03 +0000 (0:00:00.385) 0:00:18.563 ********* 2025-05-28 19:10:04.009760 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:10:04.010180 | orchestrator | 2025-05-28 19:10:04.010209 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:04.011546 | orchestrator | Wednesday 28 May 2025 19:10:04 +0000 (0:00:00.218) 0:00:18.782 ********* 2025-05-28 19:10:04.220650 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:10:04.221058 | orchestrator | 2025-05-28 19:10:04.221482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:04.222950 | orchestrator | Wednesday 28 May 2025 19:10:04 +0000 (0:00:00.208) 0:00:18.991 ********* 2025-05-28 19:10:04.435859 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:10:04.437001 | orchestrator | 2025-05-28 19:10:04.439938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:04.440515 | 
orchestrator | Wednesday 28 May 2025 19:10:04 +0000 (0:00:00.214) 0:00:19.206 ********* 2025-05-28 19:10:05.014281 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:10:05.016401 | orchestrator | 2025-05-28 19:10:05.018204 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:05.019089 | orchestrator | Wednesday 28 May 2025 19:10:05 +0000 (0:00:00.577) 0:00:19.783 ********* 2025-05-28 19:10:05.228684 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:10:05.228782 | orchestrator | 2025-05-28 19:10:05.229213 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:05.229494 | orchestrator | Wednesday 28 May 2025 19:10:05 +0000 (0:00:00.216) 0:00:20.000 ********* 2025-05-28 19:10:05.451700 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:10:05.452504 | orchestrator | 2025-05-28 19:10:05.454082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:05.454690 | orchestrator | Wednesday 28 May 2025 19:10:05 +0000 (0:00:00.224) 0:00:20.224 ********* 2025-05-28 19:10:05.660372 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:10:05.662466 | orchestrator | 2025-05-28 19:10:05.663316 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:05.664582 | orchestrator | Wednesday 28 May 2025 19:10:05 +0000 (0:00:00.202) 0:00:20.426 ********* 2025-05-28 19:10:05.871139 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:10:05.873607 | orchestrator | 2025-05-28 19:10:05.874208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:05.877070 | orchestrator | Wednesday 28 May 2025 19:10:05 +0000 (0:00:00.215) 0:00:20.642 ********* 2025-05-28 19:10:06.311148 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3) 2025-05-28 19:10:06.311873 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3) 2025-05-28 19:10:06.312528 | orchestrator | 2025-05-28 19:10:06.314062 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:06.315349 | orchestrator | Wednesday 28 May 2025 19:10:06 +0000 (0:00:00.440) 0:00:21.082 ********* 2025-05-28 19:10:06.757687 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0c0aa11d-14fc-40a7-bbcb-a7c7d902b836) 2025-05-28 19:10:06.757978 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0c0aa11d-14fc-40a7-bbcb-a7c7d902b836) 2025-05-28 19:10:06.758006 | orchestrator | 2025-05-28 19:10:06.758602 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:06.759043 | orchestrator | Wednesday 28 May 2025 19:10:06 +0000 (0:00:00.446) 0:00:21.529 ********* 2025-05-28 19:10:07.174605 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6fe61b53-6367-46c0-9f1e-24f42cf64445) 2025-05-28 19:10:07.174689 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6fe61b53-6367-46c0-9f1e-24f42cf64445) 2025-05-28 19:10:07.175284 | orchestrator | 2025-05-28 19:10:07.176276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:07.176647 | orchestrator | Wednesday 28 May 2025 19:10:07 +0000 (0:00:00.417) 0:00:21.947 ********* 2025-05-28 19:10:07.635970 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3485bbb9-dc34-4923-9640-15ed9830c3cd) 2025-05-28 19:10:07.636646 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3485bbb9-dc34-4923-9640-15ed9830c3cd) 2025-05-28 19:10:07.637302 | orchestrator | 2025-05-28 19:10:07.638200 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-05-28 19:10:07.638532 | orchestrator | Wednesday 28 May 2025 19:10:07 +0000 (0:00:00.458) 0:00:22.405 ********* 2025-05-28 19:10:08.323208 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-28 19:10:08.323332 | orchestrator | 2025-05-28 19:10:08.323356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:08.323454 | orchestrator | Wednesday 28 May 2025 19:10:08 +0000 (0:00:00.688) 0:00:23.094 ********* 2025-05-28 19:10:09.201135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-28 19:10:09.202433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-28 19:10:09.205345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-28 19:10:09.207066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-28 19:10:09.207108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-28 19:10:09.207120 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-28 19:10:09.207691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-28 19:10:09.209355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-28 19:10:09.210525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-28 19:10:09.211081 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-28 19:10:09.212054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2025-05-28 19:10:09.213834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-28 19:10:09.214586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-28 19:10:09.215970 | orchestrator |
2025-05-28 19:10:09.216415 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:09.216985 | orchestrator | Wednesday 28 May 2025 19:10:09 +0000 (0:00:00.877) 0:00:23.971 *********
2025-05-28 19:10:09.408497 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:09.408879 | orchestrator |
2025-05-28 19:10:09.409011 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:09.410290 | orchestrator | Wednesday 28 May 2025 19:10:09 +0000 (0:00:00.208) 0:00:24.179 *********
2025-05-28 19:10:09.610405 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:09.610586 | orchestrator |
2025-05-28 19:10:09.610906 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:09.611138 | orchestrator | Wednesday 28 May 2025 19:10:09 +0000 (0:00:00.202) 0:00:24.382 *********
2025-05-28 19:10:09.805614 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:09.805717 | orchestrator |
2025-05-28 19:10:09.808332 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:09.808395 | orchestrator | Wednesday 28 May 2025 19:10:09 +0000 (0:00:00.193) 0:00:24.576 *********
2025-05-28 19:10:10.032206 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:10.032311 | orchestrator |
2025-05-28 19:10:10.033072 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:10.033290 | orchestrator | Wednesday 28 May 2025 19:10:10 +0000 (0:00:00.226) 0:00:24.802 *********
2025-05-28 19:10:10.249569 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:10.254095 | orchestrator |
2025-05-28 19:10:10.254139 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:10.254155 | orchestrator | Wednesday 28 May 2025 19:10:10 +0000 (0:00:00.218) 0:00:25.020 *********
2025-05-28 19:10:10.517392 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:10.518960 | orchestrator |
2025-05-28 19:10:10.519501 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:10.523455 | orchestrator | Wednesday 28 May 2025 19:10:10 +0000 (0:00:00.267) 0:00:25.288 *********
2025-05-28 19:10:10.745732 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:10.746735 | orchestrator |
2025-05-28 19:10:10.747065 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:10.747793 | orchestrator | Wednesday 28 May 2025 19:10:10 +0000 (0:00:00.230) 0:00:25.518 *********
2025-05-28 19:10:10.960382 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:10.960492 | orchestrator |
2025-05-28 19:10:10.962112 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:10.962255 | orchestrator | Wednesday 28 May 2025 19:10:10 +0000 (0:00:00.213) 0:00:25.731 *********
2025-05-28 19:10:12.094105 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-28 19:10:12.095686 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-28 19:10:12.098079 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-28 19:10:12.098284 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-28 19:10:12.100187 | orchestrator |
2025-05-28 19:10:12.101487 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:12.103435 | orchestrator | Wednesday 28 May 2025 19:10:12 +0000 (0:00:01.131) 0:00:26.863 *********
2025-05-28 19:10:12.303903 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:12.305037 | orchestrator |
2025-05-28 19:10:12.305764 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:12.306572 | orchestrator | Wednesday 28 May 2025 19:10:12 +0000 (0:00:00.212) 0:00:27.076 *********
2025-05-28 19:10:12.533388 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:12.533886 | orchestrator |
2025-05-28 19:10:12.534774 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:12.536179 | orchestrator | Wednesday 28 May 2025 19:10:12 +0000 (0:00:00.228) 0:00:27.304 *********
2025-05-28 19:10:12.771218 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:12.771622 | orchestrator |
2025-05-28 19:10:12.772953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:10:12.773914 | orchestrator | Wednesday 28 May 2025 19:10:12 +0000 (0:00:00.238) 0:00:27.542 *********
2025-05-28 19:10:12.991890 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:12.992438 | orchestrator |
2025-05-28 19:10:12.993662 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-28 19:10:12.994403 | orchestrator | Wednesday 28 May 2025 19:10:12 +0000 (0:00:00.220) 0:00:27.763 *********
2025-05-28 19:10:13.183136 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-05-28 19:10:13.184020 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-05-28 19:10:13.185051 | orchestrator |
2025-05-28 19:10:13.185880 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-28 19:10:13.188059 | orchestrator | Wednesday 28 May 2025 19:10:13 +0000 (0:00:00.191) 0:00:27.955 *********
2025-05-28 19:10:13.331998 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:13.332142 | orchestrator |
2025-05-28 19:10:13.332232 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-28 19:10:13.333570 | orchestrator | Wednesday 28 May 2025 19:10:13 +0000 (0:00:00.147) 0:00:28.102 *********
2025-05-28 19:10:13.472847 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:13.472946 | orchestrator |
2025-05-28 19:10:13.473722 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-28 19:10:13.474418 | orchestrator | Wednesday 28 May 2025 19:10:13 +0000 (0:00:00.141) 0:00:28.244 *********
2025-05-28 19:10:13.614925 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:13.615459 | orchestrator |
2025-05-28 19:10:13.617005 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-28 19:10:13.620091 | orchestrator | Wednesday 28 May 2025 19:10:13 +0000 (0:00:00.142) 0:00:28.387 *********
2025-05-28 19:10:13.772678 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:10:13.772766 | orchestrator |
2025-05-28 19:10:13.773020 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-28 19:10:13.773410 | orchestrator | Wednesday 28 May 2025 19:10:13 +0000 (0:00:00.156) 0:00:28.543 *********
2025-05-28 19:10:13.966912 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ed7399e-dc97-5c28-9f68-879666a39403'}})
2025-05-28 19:10:13.966998 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0344b063-3cec-5ade-bfbf-9241287811af'}})
2025-05-28 19:10:13.967188 | orchestrator |
2025-05-28 19:10:13.967320 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-28 19:10:13.967936 | orchestrator | Wednesday 28 May 2025 19:10:13 +0000 (0:00:00.194) 0:00:28.738 *********
2025-05-28 19:10:14.134532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ed7399e-dc97-5c28-9f68-879666a39403'}})
2025-05-28 19:10:14.134740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0344b063-3cec-5ade-bfbf-9241287811af'}})
2025-05-28 19:10:14.135360 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:14.136130 | orchestrator |
2025-05-28 19:10:14.136566 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-28 19:10:14.137847 | orchestrator | Wednesday 28 May 2025 19:10:14 +0000 (0:00:00.169) 0:00:28.907 *********
2025-05-28 19:10:14.510377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ed7399e-dc97-5c28-9f68-879666a39403'}})
2025-05-28 19:10:14.511020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0344b063-3cec-5ade-bfbf-9241287811af'}})
2025-05-28 19:10:14.512427 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:14.513613 | orchestrator |
2025-05-28 19:10:14.514343 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-28 19:10:14.515176 | orchestrator | Wednesday 28 May 2025 19:10:14 +0000 (0:00:00.374) 0:00:29.282 *********
2025-05-28 19:10:14.695424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ed7399e-dc97-5c28-9f68-879666a39403'}})
2025-05-28 19:10:14.695561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0344b063-3cec-5ade-bfbf-9241287811af'}})
2025-05-28 19:10:14.696494 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:14.697562 | orchestrator |
2025-05-28 19:10:14.698204 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-28 19:10:14.698858 | orchestrator | Wednesday 28 May 2025 19:10:14 +0000 (0:00:00.186) 0:00:29.468 *********
2025-05-28 19:10:14.850004 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:10:14.851131 | orchestrator |
2025-05-28 19:10:14.851760 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-28 19:10:14.853405 | orchestrator | Wednesday 28 May 2025 19:10:14 +0000 (0:00:00.154) 0:00:29.622 *********
2025-05-28 19:10:15.005207 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:10:15.006647 | orchestrator |
2025-05-28 19:10:15.008939 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-28 19:10:15.010280 | orchestrator | Wednesday 28 May 2025 19:10:14 +0000 (0:00:00.151) 0:00:29.774 *********
2025-05-28 19:10:15.148974 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:15.150839 | orchestrator |
2025-05-28 19:10:15.153188 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-28 19:10:15.154106 | orchestrator | Wednesday 28 May 2025 19:10:15 +0000 (0:00:00.145) 0:00:29.920 *********
2025-05-28 19:10:15.288857 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:15.289252 | orchestrator |
2025-05-28 19:10:15.289753 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-28 19:10:15.289966 | orchestrator | Wednesday 28 May 2025 19:10:15 +0000 (0:00:00.140) 0:00:30.060 *********
2025-05-28 19:10:15.446218 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:15.447356 | orchestrator |
2025-05-28 19:10:15.449079 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-28 19:10:15.449380 | orchestrator | Wednesday 28 May 2025 19:10:15 +0000 (0:00:00.154) 0:00:30.215 *********
2025-05-28 19:10:15.591602 | orchestrator | ok: [testbed-node-4] => {
2025-05-28 19:10:15.592901 | orchestrator |  "ceph_osd_devices": {
2025-05-28 19:10:15.594087 | orchestrator |  "sdb": {
2025-05-28 19:10:15.595269 | orchestrator |  "osd_lvm_uuid": "3ed7399e-dc97-5c28-9f68-879666a39403"
2025-05-28 19:10:15.596249 | orchestrator |  },
2025-05-28 19:10:15.596920 | orchestrator |  "sdc": {
2025-05-28 19:10:15.597941 | orchestrator |  "osd_lvm_uuid": "0344b063-3cec-5ade-bfbf-9241287811af"
2025-05-28 19:10:15.598766 | orchestrator |  }
2025-05-28 19:10:15.599089 | orchestrator |  }
2025-05-28 19:10:15.599984 | orchestrator | }
2025-05-28 19:10:15.600440 | orchestrator |
2025-05-28 19:10:15.600537 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-28 19:10:15.600783 | orchestrator | Wednesday 28 May 2025 19:10:15 +0000 (0:00:00.148) 0:00:30.363 *********
2025-05-28 19:10:15.738982 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:15.739493 | orchestrator |
2025-05-28 19:10:15.739738 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-28 19:10:15.740511 | orchestrator | Wednesday 28 May 2025 19:10:15 +0000 (0:00:00.147) 0:00:30.511 *********
2025-05-28 19:10:15.883074 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:15.883239 | orchestrator |
2025-05-28 19:10:15.884125 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-28 19:10:15.885047 | orchestrator | Wednesday 28 May 2025 19:10:15 +0000 (0:00:00.142) 0:00:30.653 *********
2025-05-28 19:10:16.021478 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:10:16.021563 | orchestrator |
2025-05-28 19:10:16.022187 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-28 19:10:16.022556 | orchestrator | Wednesday 28 May 2025 19:10:16 +0000 (0:00:00.140) 0:00:30.794 *********
2025-05-28 19:10:16.507289 | orchestrator | changed: [testbed-node-4] => {
2025-05-28 19:10:16.508154 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-28 19:10:16.509155 | orchestrator |  "ceph_osd_devices": {
2025-05-28 19:10:16.510083 | orchestrator |  "sdb": {
2025-05-28 19:10:16.510735 | orchestrator |  "osd_lvm_uuid": "3ed7399e-dc97-5c28-9f68-879666a39403"
2025-05-28 19:10:16.512154 | orchestrator |  },
2025-05-28 19:10:16.513220 | orchestrator |  "sdc": {
2025-05-28 19:10:16.513748 | orchestrator |  "osd_lvm_uuid": "0344b063-3cec-5ade-bfbf-9241287811af"
2025-05-28 19:10:16.514725 | orchestrator |  }
2025-05-28 19:10:16.514991 | orchestrator |  },
2025-05-28 19:10:16.516240 | orchestrator |  "lvm_volumes": [
2025-05-28 19:10:16.516310 | orchestrator |  {
2025-05-28 19:10:16.517387 | orchestrator |  "data": "osd-block-3ed7399e-dc97-5c28-9f68-879666a39403",
2025-05-28 19:10:16.517472 | orchestrator |  "data_vg": "ceph-3ed7399e-dc97-5c28-9f68-879666a39403"
2025-05-28 19:10:16.518136 | orchestrator |  },
2025-05-28 19:10:16.518386 | orchestrator |  {
2025-05-28 19:10:16.519175 | orchestrator |  "data": "osd-block-0344b063-3cec-5ade-bfbf-9241287811af",
2025-05-28 19:10:16.519198 | orchestrator |  "data_vg": "ceph-0344b063-3cec-5ade-bfbf-9241287811af"
2025-05-28 19:10:16.519372 | orchestrator |  }
2025-05-28 19:10:16.519752 | orchestrator |  ]
2025-05-28 19:10:16.520480 | orchestrator |  }
2025-05-28 19:10:16.520504 | orchestrator | }
2025-05-28 19:10:16.521000 | orchestrator |
2025-05-28 19:10:16.522869 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-28 19:10:16.522894 | orchestrator | Wednesday 28 May 2025 19:10:16 +0000 (0:00:00.485) 0:00:31.279 *********
2025-05-28 19:10:17.922881 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-28 19:10:17.923595 | orchestrator |
2025-05-28 19:10:17.924657 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-28 19:10:17.927288 | orchestrator |
2025-05-28 19:10:17.928284 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-28 19:10:17.929285 | orchestrator | Wednesday 28 May 2025 19:10:17 +0000 (0:00:01.413) 0:00:32.693 *********
2025-05-28 19:10:18.202285 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-28 19:10:18.202462 | orchestrator |
2025-05-28 19:10:18.202763 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-28 19:10:18.203248 | orchestrator | Wednesday 28 May 2025 19:10:18 +0000 (0:00:00.281) 0:00:32.974 *********
2025-05-28 19:10:18.852055 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:10:18.852366 | orchestrator |
2025-05-28 19:10:18.853260 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:10:18.854463 | orchestrator | Wednesday 28 May 2025 19:10:18 +0000 (0:00:00.649) 0:00:33.624 *********
2025-05-28 19:10:19.275271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-28 19:10:19.275949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-28 19:10:19.279169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-28 19:10:19.279225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-28 19:10:19.279237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-28 19:10:19.279249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-28 19:10:19.279310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-28 19:10:19.279762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-28 19:10:19.280201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-28 19:10:19.280954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-28 19:10:19.281428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-28 19:10:19.281706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-28 19:10:19.282123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-28 19:10:19.282336 | orchestrator |
2025-05-28 19:10:19.282581 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:10:19.283687 | orchestrator | Wednesday 28 May 2025 19:10:19 +0000 (0:00:00.422) 0:00:34.046 *********
2025-05-28 19:10:19.487196 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:10:19.487353 | orchestrator |
2025-05-28 19:10:19.487969 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:10:19.489032 | orchestrator | Wednesday 28 May 2025 19:10:19 +0000 (0:00:00.212) 0:00:34.259 *********
2025-05-28 19:10:19.708055 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:10:19.708658 | orchestrator |
2025-05-28 19:10:19.709359 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:10:19.710145 | orchestrator | Wednesday 28 May 2025 19:10:19 +0000 (0:00:00.221) 0:00:34.480 *********
2025-05-28 19:10:19.915559 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:10:19.916157 | orchestrator |
2025-05-28 19:10:19.917210 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:10:19.918448 | orchestrator | Wednesday 28 May 2025 19:10:19 +0000 (0:00:00.207) 0:00:34.688 *********
2025-05-28 19:10:20.139300 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:10:20.139966 | orchestrator |
2025-05-28 19:10:20.140858 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:10:20.141672 | orchestrator | Wednesday 28 May 2025 19:10:20 +0000 (0:00:00.220) 0:00:34.908 *********
2025-05-28 19:10:20.343504 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:10:20.343858 | orchestrator |
2025-05-28 19:10:20.345247 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:10:20.346207 | orchestrator | Wednesday 28 May 2025 19:10:20 +0000 (0:00:00.206) 0:00:35.115 *********
2025-05-28 19:10:20.556203 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:10:20.557045 | orchestrator |
2025-05-28 19:10:20.557890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:10:20.559114 | orchestrator | Wednesday 28 May 2025 19:10:20 +0000 (0:00:00.210) 0:00:35.326 *********
2025-05-28 19:10:20.761655 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:10:20.761861 | orchestrator |
2025-05-28 19:10:20.763049 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:10:20.765332 | orchestrator | Wednesday 28 May 2025 19:10:20 +0000 (0:00:00.206) 0:00:35.532 *********
2025-05-28 19:10:20.972888 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:10:20.973374 | orchestrator |
2025-05-28 19:10:20.974973 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:10:20.975858 | orchestrator | Wednesday 28 May 2025 19:10:20 +0000 (0:00:00.211) 0:00:35.743 *********
2025-05-28 19:10:21.862662 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87)
2025-05-28 19:10:21.863432 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87)
2025-05-28 19:10:21.864875 | orchestrator |
2025-05-28 19:10:21.865237 | orchestrator | TASK [Add
known links to the list of available block devices] ****************** 2025-05-28 19:10:21.866126 | orchestrator | Wednesday 28 May 2025 19:10:21 +0000 (0:00:00.891) 0:00:36.634 ********* 2025-05-28 19:10:22.300783 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1e78336b-5c45-4f72-b22f-cac6621703c1) 2025-05-28 19:10:22.300944 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1e78336b-5c45-4f72-b22f-cac6621703c1) 2025-05-28 19:10:22.300960 | orchestrator | 2025-05-28 19:10:22.301035 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:22.301397 | orchestrator | Wednesday 28 May 2025 19:10:22 +0000 (0:00:00.435) 0:00:37.070 ********* 2025-05-28 19:10:22.736877 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_669b4378-b931-4094-a90b-e4d774be1d1d) 2025-05-28 19:10:22.737710 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_669b4378-b931-4094-a90b-e4d774be1d1d) 2025-05-28 19:10:22.738352 | orchestrator | 2025-05-28 19:10:22.739123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:22.740240 | orchestrator | Wednesday 28 May 2025 19:10:22 +0000 (0:00:00.436) 0:00:37.507 ********* 2025-05-28 19:10:23.186988 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_30074f97-ca08-4933-8c1f-7f138584444d) 2025-05-28 19:10:23.187235 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_30074f97-ca08-4933-8c1f-7f138584444d) 2025-05-28 19:10:23.187744 | orchestrator | 2025-05-28 19:10:23.189680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:10:23.190310 | orchestrator | Wednesday 28 May 2025 19:10:23 +0000 (0:00:00.451) 0:00:37.958 ********* 2025-05-28 19:10:23.529031 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-28 19:10:23.529554 | 
orchestrator | 2025-05-28 19:10:23.530576 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:23.531449 | orchestrator | Wednesday 28 May 2025 19:10:23 +0000 (0:00:00.342) 0:00:38.300 ********* 2025-05-28 19:10:23.949771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-28 19:10:23.950297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-28 19:10:23.951636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-28 19:10:23.952617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-28 19:10:23.953790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-28 19:10:23.954249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-28 19:10:23.955142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-28 19:10:23.956112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-28 19:10:23.956869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-28 19:10:23.957088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-28 19:10:23.957431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-28 19:10:23.957874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-28 19:10:23.958295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-28 19:10:23.958541 | orchestrator | 
2025-05-28 19:10:23.958946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:23.959553 | orchestrator | Wednesday 28 May 2025 19:10:23 +0000 (0:00:00.420) 0:00:38.721 ********* 2025-05-28 19:10:24.165499 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:24.165606 | orchestrator | 2025-05-28 19:10:24.165831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:24.166671 | orchestrator | Wednesday 28 May 2025 19:10:24 +0000 (0:00:00.213) 0:00:38.935 ********* 2025-05-28 19:10:24.372862 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:24.373665 | orchestrator | 2025-05-28 19:10:24.374486 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:24.375588 | orchestrator | Wednesday 28 May 2025 19:10:24 +0000 (0:00:00.208) 0:00:39.144 ********* 2025-05-28 19:10:24.597964 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:24.598167 | orchestrator | 2025-05-28 19:10:24.599254 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:24.599490 | orchestrator | Wednesday 28 May 2025 19:10:24 +0000 (0:00:00.225) 0:00:39.369 ********* 2025-05-28 19:10:25.229744 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:25.231183 | orchestrator | 2025-05-28 19:10:25.232272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:25.233067 | orchestrator | Wednesday 28 May 2025 19:10:25 +0000 (0:00:00.629) 0:00:39.999 ********* 2025-05-28 19:10:25.450233 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:25.455268 | orchestrator | 2025-05-28 19:10:25.455313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:25.455328 | orchestrator | Wednesday 28 May 2025 19:10:25 +0000 
(0:00:00.221) 0:00:40.220 ********* 2025-05-28 19:10:25.657873 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:25.657974 | orchestrator | 2025-05-28 19:10:25.658190 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:25.659246 | orchestrator | Wednesday 28 May 2025 19:10:25 +0000 (0:00:00.208) 0:00:40.429 ********* 2025-05-28 19:10:25.881093 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:25.881316 | orchestrator | 2025-05-28 19:10:25.881731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:25.882440 | orchestrator | Wednesday 28 May 2025 19:10:25 +0000 (0:00:00.223) 0:00:40.653 ********* 2025-05-28 19:10:26.105612 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:26.106108 | orchestrator | 2025-05-28 19:10:26.107111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:26.109879 | orchestrator | Wednesday 28 May 2025 19:10:26 +0000 (0:00:00.223) 0:00:40.877 ********* 2025-05-28 19:10:26.805735 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-28 19:10:26.806759 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-28 19:10:26.807692 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-28 19:10:26.808976 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-28 19:10:26.809317 | orchestrator | 2025-05-28 19:10:26.811120 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:26.811210 | orchestrator | Wednesday 28 May 2025 19:10:26 +0000 (0:00:00.701) 0:00:41.578 ********* 2025-05-28 19:10:27.010991 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:27.012669 | orchestrator | 2025-05-28 19:10:27.012859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:27.013233 | orchestrator | 
Wednesday 28 May 2025 19:10:27 +0000 (0:00:00.204) 0:00:41.782 ********* 2025-05-28 19:10:27.235638 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:27.235863 | orchestrator | 2025-05-28 19:10:27.236991 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:27.239598 | orchestrator | Wednesday 28 May 2025 19:10:27 +0000 (0:00:00.224) 0:00:42.006 ********* 2025-05-28 19:10:27.430296 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:27.430602 | orchestrator | 2025-05-28 19:10:27.431982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:10:27.433290 | orchestrator | Wednesday 28 May 2025 19:10:27 +0000 (0:00:00.195) 0:00:42.202 ********* 2025-05-28 19:10:27.666318 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:27.666493 | orchestrator | 2025-05-28 19:10:27.666957 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-28 19:10:27.667476 | orchestrator | Wednesday 28 May 2025 19:10:27 +0000 (0:00:00.236) 0:00:42.439 ********* 2025-05-28 19:10:27.834739 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-28 19:10:27.835036 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-28 19:10:27.835942 | orchestrator | 2025-05-28 19:10:27.836715 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-28 19:10:27.837397 | orchestrator | Wednesday 28 May 2025 19:10:27 +0000 (0:00:00.167) 0:00:42.607 ********* 2025-05-28 19:10:28.197789 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:28.200114 | orchestrator | 2025-05-28 19:10:28.200180 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-28 19:10:28.200201 | orchestrator | Wednesday 28 May 2025 19:10:28 +0000 (0:00:00.360) 0:00:42.967 ********* 
2025-05-28 19:10:28.331131 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:28.331538 | orchestrator | 2025-05-28 19:10:28.332409 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-28 19:10:28.334190 | orchestrator | Wednesday 28 May 2025 19:10:28 +0000 (0:00:00.135) 0:00:43.103 ********* 2025-05-28 19:10:28.488073 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:28.488784 | orchestrator | 2025-05-28 19:10:28.489248 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-28 19:10:28.490345 | orchestrator | Wednesday 28 May 2025 19:10:28 +0000 (0:00:00.155) 0:00:43.258 ********* 2025-05-28 19:10:28.624272 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:10:28.625486 | orchestrator | 2025-05-28 19:10:28.627476 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-28 19:10:28.628530 | orchestrator | Wednesday 28 May 2025 19:10:28 +0000 (0:00:00.137) 0:00:43.396 ********* 2025-05-28 19:10:28.820651 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5db078c0-6128-52c2-9305-54ff671eda75'}}) 2025-05-28 19:10:28.821715 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fda1a2ce-c0e6-5c69-aaa5-109883ddc076'}}) 2025-05-28 19:10:28.822451 | orchestrator | 2025-05-28 19:10:28.823513 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-28 19:10:28.824944 | orchestrator | Wednesday 28 May 2025 19:10:28 +0000 (0:00:00.197) 0:00:43.593 ********* 2025-05-28 19:10:28.995180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5db078c0-6128-52c2-9305-54ff671eda75'}})  2025-05-28 19:10:28.995585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fda1a2ce-c0e6-5c69-aaa5-109883ddc076'}})  
2025-05-28 19:10:28.997659 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:28.998578 | orchestrator | 2025-05-28 19:10:28.999681 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-28 19:10:29.001953 | orchestrator | Wednesday 28 May 2025 19:10:28 +0000 (0:00:00.172) 0:00:43.766 ********* 2025-05-28 19:10:29.186669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5db078c0-6128-52c2-9305-54ff671eda75'}})  2025-05-28 19:10:29.188200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fda1a2ce-c0e6-5c69-aaa5-109883ddc076'}})  2025-05-28 19:10:29.188234 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:29.188247 | orchestrator | 2025-05-28 19:10:29.188893 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-28 19:10:29.190414 | orchestrator | Wednesday 28 May 2025 19:10:29 +0000 (0:00:00.190) 0:00:43.957 ********* 2025-05-28 19:10:29.359244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5db078c0-6128-52c2-9305-54ff671eda75'}})  2025-05-28 19:10:29.359758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fda1a2ce-c0e6-5c69-aaa5-109883ddc076'}})  2025-05-28 19:10:29.360332 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:29.361852 | orchestrator | 2025-05-28 19:10:29.364550 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-28 19:10:29.364635 | orchestrator | Wednesday 28 May 2025 19:10:29 +0000 (0:00:00.173) 0:00:44.130 ********* 2025-05-28 19:10:29.510246 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:10:29.510371 | orchestrator | 2025-05-28 19:10:29.511814 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-28 19:10:29.512982 | 
orchestrator | Wednesday 28 May 2025 19:10:29 +0000 (0:00:00.150) 0:00:44.280 ********* 2025-05-28 19:10:29.653415 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:10:29.654933 | orchestrator | 2025-05-28 19:10:29.656425 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-28 19:10:29.657346 | orchestrator | Wednesday 28 May 2025 19:10:29 +0000 (0:00:00.145) 0:00:44.425 ********* 2025-05-28 19:10:29.809853 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:29.810593 | orchestrator | 2025-05-28 19:10:29.810979 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-28 19:10:29.812014 | orchestrator | Wednesday 28 May 2025 19:10:29 +0000 (0:00:00.154) 0:00:44.580 ********* 2025-05-28 19:10:30.179072 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:30.181151 | orchestrator | 2025-05-28 19:10:30.181230 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-28 19:10:30.182244 | orchestrator | Wednesday 28 May 2025 19:10:30 +0000 (0:00:00.370) 0:00:44.950 ********* 2025-05-28 19:10:30.318948 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:30.319123 | orchestrator | 2025-05-28 19:10:30.319618 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-28 19:10:30.320187 | orchestrator | Wednesday 28 May 2025 19:10:30 +0000 (0:00:00.140) 0:00:45.091 ********* 2025-05-28 19:10:30.475443 | orchestrator | ok: [testbed-node-5] => { 2025-05-28 19:10:30.478606 | orchestrator |  "ceph_osd_devices": { 2025-05-28 19:10:30.482191 | orchestrator |  "sdb": { 2025-05-28 19:10:30.483317 | orchestrator |  "osd_lvm_uuid": "5db078c0-6128-52c2-9305-54ff671eda75" 2025-05-28 19:10:30.484896 | orchestrator |  }, 2025-05-28 19:10:30.486305 | orchestrator |  "sdc": { 2025-05-28 19:10:30.486961 | orchestrator |  "osd_lvm_uuid": 
"fda1a2ce-c0e6-5c69-aaa5-109883ddc076" 2025-05-28 19:10:30.487235 | orchestrator |  } 2025-05-28 19:10:30.487893 | orchestrator |  } 2025-05-28 19:10:30.488260 | orchestrator | } 2025-05-28 19:10:30.488857 | orchestrator | 2025-05-28 19:10:30.489365 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-28 19:10:30.489901 | orchestrator | Wednesday 28 May 2025 19:10:30 +0000 (0:00:00.156) 0:00:45.247 ********* 2025-05-28 19:10:30.626873 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:30.627924 | orchestrator | 2025-05-28 19:10:30.629617 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-28 19:10:30.630169 | orchestrator | Wednesday 28 May 2025 19:10:30 +0000 (0:00:00.150) 0:00:45.397 ********* 2025-05-28 19:10:30.758610 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:30.760462 | orchestrator | 2025-05-28 19:10:30.761645 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-28 19:10:30.764174 | orchestrator | Wednesday 28 May 2025 19:10:30 +0000 (0:00:00.130) 0:00:45.528 ********* 2025-05-28 19:10:30.909308 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:10:30.909904 | orchestrator | 2025-05-28 19:10:30.910865 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-28 19:10:30.911861 | orchestrator | Wednesday 28 May 2025 19:10:30 +0000 (0:00:00.153) 0:00:45.681 ********* 2025-05-28 19:10:31.191459 | orchestrator | changed: [testbed-node-5] => { 2025-05-28 19:10:31.192636 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-28 19:10:31.192969 | orchestrator |  "ceph_osd_devices": { 2025-05-28 19:10:31.193655 | orchestrator |  "sdb": { 2025-05-28 19:10:31.194544 | orchestrator |  "osd_lvm_uuid": "5db078c0-6128-52c2-9305-54ff671eda75" 2025-05-28 19:10:31.195089 | orchestrator |  }, 2025-05-28 19:10:31.195852 | 
orchestrator |  "sdc": { 2025-05-28 19:10:31.196698 | orchestrator |  "osd_lvm_uuid": "fda1a2ce-c0e6-5c69-aaa5-109883ddc076" 2025-05-28 19:10:31.197277 | orchestrator |  } 2025-05-28 19:10:31.197978 | orchestrator |  }, 2025-05-28 19:10:31.198292 | orchestrator |  "lvm_volumes": [ 2025-05-28 19:10:31.199036 | orchestrator |  { 2025-05-28 19:10:31.199540 | orchestrator |  "data": "osd-block-5db078c0-6128-52c2-9305-54ff671eda75", 2025-05-28 19:10:31.200069 | orchestrator |  "data_vg": "ceph-5db078c0-6128-52c2-9305-54ff671eda75" 2025-05-28 19:10:31.200741 | orchestrator |  }, 2025-05-28 19:10:31.201256 | orchestrator |  { 2025-05-28 19:10:31.201725 | orchestrator |  "data": "osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076", 2025-05-28 19:10:31.202230 | orchestrator |  "data_vg": "ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076" 2025-05-28 19:10:31.202659 | orchestrator |  } 2025-05-28 19:10:31.203225 | orchestrator |  ] 2025-05-28 19:10:31.203554 | orchestrator |  } 2025-05-28 19:10:31.204125 | orchestrator | } 2025-05-28 19:10:31.204592 | orchestrator | 2025-05-28 19:10:31.205097 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-28 19:10:31.205631 | orchestrator | Wednesday 28 May 2025 19:10:31 +0000 (0:00:00.279) 0:00:45.960 ********* 2025-05-28 19:10:32.513204 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-28 19:10:32.513897 | orchestrator | 2025-05-28 19:10:32.515120 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:10:32.515170 | orchestrator | 2025-05-28 19:10:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-28 19:10:32.515186 | orchestrator | 2025-05-28 19:10:32 | INFO  | Please wait and do not abort execution. 
2025-05-28 19:10:32.515383 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-28 19:10:32.515954 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-28 19:10:32.516643 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-28 19:10:32.517611 | orchestrator | 2025-05-28 19:10:32.518312 | orchestrator | 2025-05-28 19:10:32.519180 | orchestrator | 2025-05-28 19:10:32.520369 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:10:32.521560 | orchestrator | Wednesday 28 May 2025 19:10:32 +0000 (0:00:01.324) 0:00:47.284 ********* 2025-05-28 19:10:32.523007 | orchestrator | =============================================================================== 2025-05-28 19:10:32.523192 | orchestrator | Write configuration file ------------------------------------------------ 4.98s 2025-05-28 19:10:32.523981 | orchestrator | Add known partitions to the list of available block devices ------------- 1.82s 2025-05-28 19:10:32.524403 | orchestrator | Add known links to the list of available block devices ------------------ 1.36s 2025-05-28 19:10:32.524897 | orchestrator | Get initial list of available block devices ----------------------------- 1.17s 2025-05-28 19:10:32.525350 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2025-05-28 19:10:32.525871 | orchestrator | Print configuration data ------------------------------------------------ 1.06s 2025-05-28 19:10:32.526156 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2025-05-28 19:10:32.526694 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2025-05-28 19:10:32.527090 | orchestrator | Get extra vars for Ceph configuration 
----------------------------------- 0.82s 2025-05-28 19:10:32.527422 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2025-05-28 19:10:32.528240 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.78s 2025-05-28 19:10:32.528633 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2025-05-28 19:10:32.528970 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2025-05-28 19:10:32.529904 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2025-05-28 19:10:32.530315 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-05-28 19:10:32.532073 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.70s 2025-05-28 19:10:32.532243 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-05-28 19:10:32.533011 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.66s 2025-05-28 19:10:32.533561 | orchestrator | Set WAL devices config data --------------------------------------------- 0.66s 2025-05-28 19:10:32.534105 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.66s 2025-05-28 19:10:44.647297 | orchestrator | 2025-05-28 19:10:44 | INFO  | Task deb192cd-0f01-42f8-a406-ffc7e2ae9ef8 is running in background. Output coming soon. 
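The play above compiles the `lvm_volumes` list from the per-device OSD UUIDs in `ceph_osd_devices` (see the "Print configuration data" output earlier in the log). A minimal sketch of that transformation, assuming only the naming scheme visible in the printed data (`osd-block-<uuid>` for the LV, `ceph-<uuid>` for the VG); the helper name is hypothetical, not the playbook's actual task:

```python
# Sketch of the block-only lvm_volumes compilation seen in the play output.
# The osd-block-<uuid> / ceph-<uuid> naming is taken from the log's printed
# configuration data; compile_lvm_volumes itself is an illustrative helper.

def compile_lvm_volumes(ceph_osd_devices):
    """Build the block-only lvm_volumes list from per-device OSD LVM UUIDs."""
    return [
        {
            "data": f"osd-block-{meta['osd_lvm_uuid']}",
            "data_vg": f"ceph-{meta['osd_lvm_uuid']}",
        }
        for device, meta in sorted(ceph_osd_devices.items())
    ]

# The two OSD devices reported for testbed-node-5 in the log above:
devices = {
    "sdb": {"osd_lvm_uuid": "5db078c0-6128-52c2-9305-54ff671eda75"},
    "sdc": {"osd_lvm_uuid": "fda1a2ce-c0e6-5c69-aaa5-109883ddc076"},
}
volumes = compile_lvm_volumes(devices)
```

With the device map shown, this reproduces the two `data`/`data_vg` pairs printed under `_ceph_configure_lvm_config_data` in the log.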
2025-05-28 19:11:10.294892 | orchestrator | 2025-05-28 19:11:01 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-05-28 19:11:10.295054 | orchestrator | 2025-05-28 19:11:01 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-05-28 19:11:10.295101 | orchestrator | 2025-05-28 19:11:01 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-05-28 19:11:10.295121 | orchestrator | 2025-05-28 19:11:01 | INFO  | Handling group overwrites in 99-overwrite 2025-05-28 19:11:10.295140 | orchestrator | 2025-05-28 19:11:01 | INFO  | Removing group frr:children from 60-generic 2025-05-28 19:11:10.295166 | orchestrator | 2025-05-28 19:11:01 | INFO  | Removing group storage:children from 50-kolla 2025-05-28 19:11:10.295184 | orchestrator | 2025-05-28 19:11:01 | INFO  | Removing group netbird:children from 50-infrastruture 2025-05-28 19:11:10.295201 | orchestrator | 2025-05-28 19:11:01 | INFO  | Removing group ceph-mds from 50-ceph 2025-05-28 19:11:10.295219 | orchestrator | 2025-05-28 19:11:01 | INFO  | Removing group ceph-rgw from 50-ceph 2025-05-28 19:11:10.295235 | orchestrator | 2025-05-28 19:11:01 | INFO  | Handling group overwrites in 20-roles 2025-05-28 19:11:10.295252 | orchestrator | 2025-05-28 19:11:01 | INFO  | Removing group k3s_node from 50-infrastruture 2025-05-28 19:11:10.295268 | orchestrator | 2025-05-28 19:11:02 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-05-28 19:11:10.295283 | orchestrator | 2025-05-28 19:11:10 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-05-28 19:11:12.018512 | orchestrator | 2025-05-28 19:11:12 | INFO  | Task 26353bbf-5f97-4ac7-8897-70e64b8b271e (ceph-create-lvm-devices) was prepared for execution. 2025-05-28 19:11:12.018595 | orchestrator | 2025-05-28 19:11:12 | INFO  | It takes a moment until task 26353bbf-5f97-4ac7-8897-70e64b8b271e (ceph-create-lvm-devices) has been started and output is visible here. 
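The "Handling group overwrites" / "Removing group X from Y" messages above suggest a precedence rule: when a higher-priority inventory fragment (such as `99-overwrite`) redefines a group, that group is dropped from lower-priority fragments. A minimal sketch of that rule, assuming fragments are plain group-to-hosts mappings ordered by priority; the names and structure here are illustrative, not the actual osism implementation:

```python
# Hypothetical sketch of inventory group-overwrite reconciliation:
# a group defined in a higher-priority fragment is removed from every
# lower-priority fragment, so the high-priority definition wins.

def reconcile(fragments):
    """fragments: list of (name, {group: hosts}) ordered low -> high priority.

    Mutates the fragment dicts in place and returns the log messages."""
    removed = []
    for i, (name, groups) in enumerate(fragments):
        # Collect every group name defined by any higher-priority fragment.
        higher = set()
        for _, later_groups in fragments[i + 1:]:
            higher.update(later_groups)
        # Drop shadowed groups from this lower-priority fragment.
        for group in sorted(set(groups) & higher):
            del groups[group]
            removed.append(f"Removing group {group} from {name}")
    return removed

msgs = reconcile([
    ("60-generic", {"frr:children": ["host-a"], "all": ["host-a"]}),
    ("99-overwrite", {"frr:children": []}),
])
```

Under these assumptions, the example yields exactly one removal message, mirroring the "Removing group frr:children from 60-generic" line in the log.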
2025-05-28 19:11:14.946314 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-28 19:11:15.462222 | orchestrator | 2025-05-28 19:11:15.462807 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-28 19:11:15.463756 | orchestrator | 2025-05-28 19:11:15.466274 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-28 19:11:15.466309 | orchestrator | Wednesday 28 May 2025 19:11:15 +0000 (0:00:00.449) 0:00:00.449 ********* 2025-05-28 19:11:15.703935 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-28 19:11:15.704023 | orchestrator | 2025-05-28 19:11:15.704117 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-28 19:11:15.704821 | orchestrator | Wednesday 28 May 2025 19:11:15 +0000 (0:00:00.243) 0:00:00.692 ********* 2025-05-28 19:11:15.936408 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:11:15.936623 | orchestrator | 2025-05-28 19:11:15.936646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:15.937086 | orchestrator | Wednesday 28 May 2025 19:11:15 +0000 (0:00:00.229) 0:00:00.922 ********* 2025-05-28 19:11:16.696459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-28 19:11:16.696556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-28 19:11:16.697023 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-28 19:11:16.697255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-28 19:11:16.700951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-28 19:11:16.701527 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-28 19:11:16.702085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-28 19:11:16.702944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-28 19:11:16.703975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-28 19:11:16.704502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-28 19:11:16.705426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-28 19:11:16.706541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-28 19:11:16.706607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-28 19:11:16.707063 | orchestrator | 2025-05-28 19:11:16.707288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:16.707735 | orchestrator | Wednesday 28 May 2025 19:11:16 +0000 (0:00:00.761) 0:00:01.684 ********* 2025-05-28 19:11:16.895912 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:11:16.896027 | orchestrator | 2025-05-28 19:11:16.896267 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:16.896932 | orchestrator | Wednesday 28 May 2025 19:11:16 +0000 (0:00:00.199) 0:00:01.883 ********* 2025-05-28 19:11:17.106179 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:11:17.106352 | orchestrator | 2025-05-28 19:11:17.106918 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:17.108208 | orchestrator | Wednesday 28 May 2025 19:11:17 +0000 (0:00:00.210) 0:00:02.093 ********* 2025-05-28 19:11:17.321072 | orchestrator | skipping: 
[testbed-node-3]
2025-05-28 19:11:17.321564 | orchestrator |
2025-05-28 19:11:17.325352 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:11:17.325497 | orchestrator | Wednesday 28 May 2025 19:11:17 +0000 (0:00:00.214) 0:00:02.308 *********
2025-05-28 19:11:17.513984 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:17.515565 | orchestrator |
2025-05-28 19:11:17.519611 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:11:17.523079 | orchestrator | Wednesday 28 May 2025 19:11:17 +0000 (0:00:00.193) 0:00:02.502 *********
2025-05-28 19:11:17.719255 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:17.720203 | orchestrator |
2025-05-28 19:11:17.722002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:11:17.724749 | orchestrator | Wednesday 28 May 2025 19:11:17 +0000 (0:00:00.204) 0:00:02.706 *********
2025-05-28 19:11:17.921221 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:17.921316 | orchestrator |
2025-05-28 19:11:17.922556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:11:17.923126 | orchestrator | Wednesday 28 May 2025 19:11:17 +0000 (0:00:00.202) 0:00:02.909 *********
2025-05-28 19:11:18.120039 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:18.120152 | orchestrator |
2025-05-28 19:11:18.121181 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:11:18.121753 | orchestrator | Wednesday 28 May 2025 19:11:18 +0000 (0:00:00.198) 0:00:03.108 *********
2025-05-28 19:11:18.324703 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:18.325295 | orchestrator |
2025-05-28 19:11:18.326116 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:11:18.327218 | orchestrator | Wednesday 28 May 2025 19:11:18 +0000 (0:00:00.204) 0:00:03.313 *********
2025-05-28 19:11:18.959987 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e)
2025-05-28 19:11:18.960813 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e)
2025-05-28 19:11:18.961704 | orchestrator |
2025-05-28 19:11:18.962841 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:11:18.963720 | orchestrator | Wednesday 28 May 2025 19:11:18 +0000 (0:00:00.631) 0:00:03.944 *********
2025-05-28 19:11:19.757563 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_49a2ee15-28bf-4b5f-b85e-3182eb91d801)
2025-05-28 19:11:19.758271 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_49a2ee15-28bf-4b5f-b85e-3182eb91d801)
2025-05-28 19:11:19.759251 | orchestrator |
2025-05-28 19:11:19.763399 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:11:19.763433 | orchestrator | Wednesday 28 May 2025 19:11:19 +0000 (0:00:00.801) 0:00:04.746 *********
2025-05-28 19:11:20.234850 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1334c062-0c98-48ca-b2e9-c7f7d80524d4)
2025-05-28 19:11:20.235555 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1334c062-0c98-48ca-b2e9-c7f7d80524d4)
2025-05-28 19:11:20.236955 | orchestrator |
2025-05-28 19:11:20.237203 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:11:20.238165 | orchestrator | Wednesday 28 May 2025 19:11:20 +0000 (0:00:00.476) 0:00:05.223 *********
2025-05-28 19:11:20.663301 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_384074e9-09a1-4592-86bd-93fc7dbc72b1)
2025-05-28 19:11:20.663921 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_384074e9-09a1-4592-86bd-93fc7dbc72b1)
2025-05-28 19:11:20.664258 | orchestrator |
2025-05-28 19:11:20.665200 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-28 19:11:20.666138 | orchestrator | Wednesday 28 May 2025 19:11:20 +0000 (0:00:00.427) 0:00:05.651 *********
2025-05-28 19:11:21.018135 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-28 19:11:21.018236 | orchestrator |
2025-05-28 19:11:21.018493 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:21.018851 | orchestrator | Wednesday 28 May 2025 19:11:21 +0000 (0:00:00.354) 0:00:06.005 *********
2025-05-28 19:11:21.507571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-28 19:11:21.508555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-28 19:11:21.509093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-28 19:11:21.510084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-28 19:11:21.510747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-28 19:11:21.511271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-28 19:11:21.511951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-28 19:11:21.512693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-28 19:11:21.514106 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-28 19:11:21.514686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-28 19:11:21.515350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-28 19:11:21.516132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-28 19:11:21.516564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-28 19:11:21.517640 | orchestrator |
2025-05-28 19:11:21.518464 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:21.519116 | orchestrator | Wednesday 28 May 2025 19:11:21 +0000 (0:00:00.490) 0:00:06.496 *********
2025-05-28 19:11:21.728053 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:21.729105 | orchestrator |
2025-05-28 19:11:21.732362 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:21.732395 | orchestrator | Wednesday 28 May 2025 19:11:21 +0000 (0:00:00.219) 0:00:06.716 *********
2025-05-28 19:11:21.923671 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:21.924686 | orchestrator |
2025-05-28 19:11:21.925884 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:21.927179 | orchestrator | Wednesday 28 May 2025 19:11:21 +0000 (0:00:00.194) 0:00:06.910 *********
2025-05-28 19:11:22.123890 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:22.124297 | orchestrator |
2025-05-28 19:11:22.124731 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:22.125291 | orchestrator | Wednesday 28 May 2025 19:11:22 +0000 (0:00:00.202) 0:00:07.113 *********
2025-05-28 19:11:22.352695 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:22.352779 | orchestrator |
2025-05-28 19:11:22.352981 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:22.354177 | orchestrator | Wednesday 28 May 2025 19:11:22 +0000 (0:00:00.228) 0:00:07.341 *********
2025-05-28 19:11:22.945887 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:22.945992 | orchestrator |
2025-05-28 19:11:22.946008 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:22.946206 | orchestrator | Wednesday 28 May 2025 19:11:22 +0000 (0:00:00.586) 0:00:07.928 *********
2025-05-28 19:11:23.144607 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:23.145541 | orchestrator |
2025-05-28 19:11:23.146287 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:23.147423 | orchestrator | Wednesday 28 May 2025 19:11:23 +0000 (0:00:00.204) 0:00:08.132 *********
2025-05-28 19:11:23.365881 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:23.367132 | orchestrator |
2025-05-28 19:11:23.370141 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:23.370250 | orchestrator | Wednesday 28 May 2025 19:11:23 +0000 (0:00:00.221) 0:00:08.354 *********
2025-05-28 19:11:23.595226 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:23.596199 | orchestrator |
2025-05-28 19:11:23.596842 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:23.597378 | orchestrator | Wednesday 28 May 2025 19:11:23 +0000 (0:00:00.229) 0:00:08.583 *********
2025-05-28 19:11:24.284202 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-28 19:11:24.286506 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-28 19:11:24.286574 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-28 19:11:24.287498 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-28 19:11:24.288324 | orchestrator |
2025-05-28 19:11:24.289330 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:24.290482 | orchestrator | Wednesday 28 May 2025 19:11:24 +0000 (0:00:00.685) 0:00:09.268 *********
2025-05-28 19:11:24.488177 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:24.488427 | orchestrator |
2025-05-28 19:11:24.489713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:24.490861 | orchestrator | Wednesday 28 May 2025 19:11:24 +0000 (0:00:00.207) 0:00:09.476 *********
2025-05-28 19:11:24.702376 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:24.702539 | orchestrator |
2025-05-28 19:11:24.702829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:24.703102 | orchestrator | Wednesday 28 May 2025 19:11:24 +0000 (0:00:00.214) 0:00:09.690 *********
2025-05-28 19:11:24.907110 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:24.907719 | orchestrator |
2025-05-28 19:11:24.908604 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:11:24.910845 | orchestrator | Wednesday 28 May 2025 19:11:24 +0000 (0:00:00.204) 0:00:09.894 *********
2025-05-28 19:11:25.100600 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:25.101386 | orchestrator |
2025-05-28 19:11:25.101421 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-28 19:11:25.101824 | orchestrator | Wednesday 28 May 2025 19:11:25 +0000 (0:00:00.194) 0:00:10.089 *********
2025-05-28 19:11:25.232762 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:25.232891 | orchestrator |
2025-05-28 19:11:25.232907 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-28 19:11:25.233111 | orchestrator | Wednesday 28 May 2025 19:11:25 +0000 (0:00:00.129) 0:00:10.219 *********
2025-05-28 19:11:25.438106 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '79c077cd-dd98-5cad-a8fa-86d8aa897eb3'}})
2025-05-28 19:11:25.438239 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '117a45ef-4e6c-5b76-bea4-f0c196d92690'}})
2025-05-28 19:11:25.438258 | orchestrator |
2025-05-28 19:11:25.438374 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-28 19:11:25.439627 | orchestrator | Wednesday 28 May 2025 19:11:25 +0000 (0:00:00.205) 0:00:10.425 *********
2025-05-28 19:11:27.613392 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:27.613508 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:27.614452 | orchestrator |
2025-05-28 19:11:27.615065 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-28 19:11:27.615483 | orchestrator | Wednesday 28 May 2025 19:11:27 +0000 (0:00:02.174) 0:00:12.599 *********
2025-05-28 19:11:27.816385 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:27.816495 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:27.816572 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:27.816833 | orchestrator |
2025-05-28 19:11:27.817203 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-28 19:11:27.817937 | orchestrator | Wednesday 28 May 2025 19:11:27 +0000 (0:00:00.203) 0:00:12.803 *********
2025-05-28 19:11:29.324431 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:29.325755 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:29.325850 | orchestrator |
2025-05-28 19:11:29.325867 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-28 19:11:29.325936 | orchestrator | Wednesday 28 May 2025 19:11:29 +0000 (0:00:01.507) 0:00:14.311 *********
2025-05-28 19:11:29.497508 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:29.498932 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:29.501935 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:29.502004 | orchestrator |
2025-05-28 19:11:29.502826 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-28 19:11:29.503496 | orchestrator | Wednesday 28 May 2025 19:11:29 +0000 (0:00:00.174) 0:00:14.485 *********
2025-05-28 19:11:29.645002 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:29.646725 | orchestrator |
2025-05-28 19:11:29.648325 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-28 19:11:29.648843 | orchestrator | Wednesday 28 May 2025 19:11:29 +0000 (0:00:00.147) 0:00:14.633 *********
2025-05-28 19:11:29.804315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:29.804468 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:29.805138 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:29.805438 | orchestrator |
2025-05-28 19:11:29.806168 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-28 19:11:29.806631 | orchestrator | Wednesday 28 May 2025 19:11:29 +0000 (0:00:00.157) 0:00:14.790 *********
2025-05-28 19:11:29.948551 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:29.948965 | orchestrator |
2025-05-28 19:11:29.949369 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-28 19:11:29.950490 | orchestrator | Wednesday 28 May 2025 19:11:29 +0000 (0:00:00.146) 0:00:14.936 *********
2025-05-28 19:11:30.122818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:30.122917 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:30.123015 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:30.123033 | orchestrator |
2025-05-28 19:11:30.123411 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-28 19:11:30.123629 | orchestrator | Wednesday 28 May 2025 19:11:30 +0000 (0:00:00.174) 0:00:15.111 *********
2025-05-28 19:11:30.427075 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:30.427668 | orchestrator |
2025-05-28 19:11:30.429077 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-28 19:11:30.429514 | orchestrator | Wednesday 28 May 2025 19:11:30 +0000 (0:00:00.303) 0:00:15.415 *********
2025-05-28 19:11:30.601961 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:30.603582 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:30.604290 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:30.604955 | orchestrator |
2025-05-28 19:11:30.605595 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-28 19:11:30.606106 | orchestrator | Wednesday 28 May 2025 19:11:30 +0000 (0:00:00.154) 0:00:15.587 *********
2025-05-28 19:11:30.755603 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:11:30.756449 | orchestrator |
2025-05-28 19:11:30.758225 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-28 19:11:30.759059 | orchestrator | Wednesday 28 May 2025 19:11:30 +0000 (0:00:00.154) 0:00:15.742 *********
2025-05-28 19:11:30.943815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:30.944000 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:30.945172 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:30.946675 | orchestrator |
2025-05-28 19:11:30.947595 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-28 19:11:30.948557 | orchestrator | Wednesday 28 May 2025 19:11:30 +0000 (0:00:00.188) 0:00:15.931 *********
2025-05-28 19:11:31.104602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:31.105668 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:31.106646 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:31.108493 | orchestrator |
2025-05-28 19:11:31.109220 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-28 19:11:31.109712 | orchestrator | Wednesday 28 May 2025 19:11:31 +0000 (0:00:00.161) 0:00:16.093 *********
2025-05-28 19:11:31.263288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:31.263972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:31.265062 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:31.266714 | orchestrator |
2025-05-28 19:11:31.267816 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-28 19:11:31.268818 | orchestrator | Wednesday 28 May 2025 19:11:31 +0000 (0:00:00.158) 0:00:16.251 *********
2025-05-28 19:11:31.415515 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:31.415603 | orchestrator |
2025-05-28 19:11:31.416965 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-28 19:11:31.417379 | orchestrator | Wednesday 28 May 2025 19:11:31 +0000 (0:00:00.151) 0:00:16.403 *********
2025-05-28 19:11:31.557490 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:31.557695 | orchestrator |
2025-05-28 19:11:31.557914 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-28 19:11:31.558360 | orchestrator | Wednesday 28 May 2025 19:11:31 +0000 (0:00:00.143) 0:00:16.546 *********
2025-05-28 19:11:31.725754 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:31.727112 | orchestrator |
2025-05-28 19:11:31.728282 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-28 19:11:31.729188 | orchestrator | Wednesday 28 May 2025 19:11:31 +0000 (0:00:00.168) 0:00:16.714 *********
2025-05-28 19:11:31.884529 | orchestrator | ok: [testbed-node-3] => {
2025-05-28 19:11:31.885503 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-28 19:11:31.886809 | orchestrator | }
2025-05-28 19:11:31.887312 | orchestrator |
2025-05-28 19:11:31.888110 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-28 19:11:31.889120 | orchestrator | Wednesday 28 May 2025 19:11:31 +0000 (0:00:00.158) 0:00:16.873 *********
2025-05-28 19:11:32.028810 | orchestrator | ok: [testbed-node-3] => {
2025-05-28 19:11:32.028926 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-28 19:11:32.029236 | orchestrator | }
2025-05-28 19:11:32.030415 | orchestrator |
2025-05-28 19:11:32.031944 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-28 19:11:32.032379 | orchestrator | Wednesday 28 May 2025 19:11:32 +0000 (0:00:00.144) 0:00:17.017 *********
2025-05-28 19:11:32.188263 | orchestrator | ok: [testbed-node-3] => {
2025-05-28 19:11:32.188394 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-28 19:11:32.189042 | orchestrator | }
2025-05-28 19:11:32.190156 | orchestrator |
2025-05-28 19:11:32.190674 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-28 19:11:32.193172 | orchestrator | Wednesday 28 May 2025 19:11:32 +0000 (0:00:00.159) 0:00:17.177 *********
2025-05-28 19:11:33.117776 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:11:33.119161 | orchestrator |
2025-05-28 19:11:33.120871 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-28 19:11:33.121290 | orchestrator | Wednesday 28 May 2025 19:11:33 +0000 (0:00:00.929) 0:00:18.106 *********
2025-05-28 19:11:33.640979 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:11:33.641331 | orchestrator |
2025-05-28 19:11:33.641936 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-28 19:11:33.642536 | orchestrator | Wednesday 28 May 2025 19:11:33 +0000 (0:00:00.523) 0:00:18.629 *********
2025-05-28 19:11:34.172669 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:11:34.172914 | orchestrator |
2025-05-28 19:11:34.176678 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-28 19:11:34.177081 | orchestrator | Wednesday 28 May 2025 19:11:34 +0000 (0:00:00.529) 0:00:19.159 *********
2025-05-28 19:11:34.320943 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:11:34.321074 | orchestrator |
2025-05-28 19:11:34.322144 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-28 19:11:34.323261 | orchestrator | Wednesday 28 May 2025 19:11:34 +0000 (0:00:00.148) 0:00:19.308 *********
2025-05-28 19:11:34.438452 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:34.438936 | orchestrator |
2025-05-28 19:11:34.440113 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-28 19:11:34.440700 | orchestrator | Wednesday 28 May 2025 19:11:34 +0000 (0:00:00.117) 0:00:19.426 *********
2025-05-28 19:11:34.555403 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:34.556061 | orchestrator |
2025-05-28 19:11:34.557737 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-28 19:11:34.558102 | orchestrator | Wednesday 28 May 2025 19:11:34 +0000 (0:00:00.117) 0:00:19.543 *********
2025-05-28 19:11:34.691666 | orchestrator | ok: [testbed-node-3] => {
2025-05-28 19:11:34.692279 | orchestrator |  "vgs_report": {
2025-05-28 19:11:34.693119 | orchestrator |  "vg": []
2025-05-28 19:11:34.695727 | orchestrator |  }
2025-05-28 19:11:34.696630 | orchestrator | }
2025-05-28 19:11:34.697077 | orchestrator |
2025-05-28 19:11:34.697691 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-28 19:11:34.698528 | orchestrator | Wednesday 28 May 2025 19:11:34 +0000 (0:00:00.135) 0:00:19.679 *********
2025-05-28 19:11:34.837017 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:34.837499 | orchestrator |
2025-05-28 19:11:34.840128 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-28 19:11:34.840172 | orchestrator | Wednesday 28 May 2025 19:11:34 +0000 (0:00:00.144) 0:00:19.823 *********
2025-05-28 19:11:34.971476 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:34.973251 | orchestrator |
2025-05-28 19:11:34.974523 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-28 19:11:34.975969 | orchestrator | Wednesday 28 May 2025 19:11:34 +0000 (0:00:00.135) 0:00:19.959 *********
2025-05-28 19:11:35.119747 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:35.119889 | orchestrator |
2025-05-28 19:11:35.119906 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-28 19:11:35.119919 | orchestrator | Wednesday 28 May 2025 19:11:35 +0000 (0:00:00.146) 0:00:20.105 *********
2025-05-28 19:11:35.243695 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:35.244600 | orchestrator |
2025-05-28 19:11:35.244911 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-28 19:11:35.245882 | orchestrator | Wednesday 28 May 2025 19:11:35 +0000 (0:00:00.127) 0:00:20.232 *********
2025-05-28 19:11:35.588025 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:35.589707 | orchestrator |
2025-05-28 19:11:35.592348 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-28 19:11:35.595264 | orchestrator | Wednesday 28 May 2025 19:11:35 +0000 (0:00:00.343) 0:00:20.576 *********
2025-05-28 19:11:35.731080 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:35.731385 | orchestrator |
2025-05-28 19:11:35.733037 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-28 19:11:35.733234 | orchestrator | Wednesday 28 May 2025 19:11:35 +0000 (0:00:00.143) 0:00:20.719 *********
2025-05-28 19:11:35.865864 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:35.865966 | orchestrator |
2025-05-28 19:11:35.868221 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-28 19:11:35.868999 | orchestrator | Wednesday 28 May 2025 19:11:35 +0000 (0:00:00.132) 0:00:20.851 *********
2025-05-28 19:11:35.996407 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:35.996511 | orchestrator |
2025-05-28 19:11:35.997237 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-28 19:11:35.998374 | orchestrator | Wednesday 28 May 2025 19:11:35 +0000 (0:00:00.133) 0:00:20.985 *********
2025-05-28 19:11:36.147924 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:36.148096 | orchestrator |
2025-05-28 19:11:36.148559 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-28 19:11:36.149621 | orchestrator | Wednesday 28 May 2025 19:11:36 +0000 (0:00:00.152) 0:00:21.137 *********
2025-05-28 19:11:36.294247 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:36.294413 | orchestrator |
2025-05-28 19:11:36.295548 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-28 19:11:36.296973 | orchestrator | Wednesday 28 May 2025 19:11:36 +0000 (0:00:00.144) 0:00:21.281 *********
2025-05-28 19:11:36.424992 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:36.425801 | orchestrator |
2025-05-28 19:11:36.426735 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-28 19:11:36.428345 | orchestrator | Wednesday 28 May 2025 19:11:36 +0000 (0:00:00.130) 0:00:21.411 *********
2025-05-28 19:11:36.557038 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:36.558110 | orchestrator |
2025-05-28 19:11:36.559315 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-28 19:11:36.560556 | orchestrator | Wednesday 28 May 2025 19:11:36 +0000 (0:00:00.133) 0:00:21.545 *********
2025-05-28 19:11:36.725835 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:36.727010 | orchestrator |
2025-05-28 19:11:36.728274 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-28 19:11:36.729719 | orchestrator | Wednesday 28 May 2025 19:11:36 +0000 (0:00:00.167) 0:00:21.712 *********
2025-05-28 19:11:36.880667 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:36.881421 | orchestrator |
2025-05-28 19:11:36.882281 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-28 19:11:36.884848 | orchestrator | Wednesday 28 May 2025 19:11:36 +0000 (0:00:00.155) 0:00:21.867 *********
2025-05-28 19:11:37.074141 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:37.074809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:37.075002 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:37.075650 | orchestrator |
2025-05-28 19:11:37.078181 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-28 19:11:37.078219 | orchestrator | Wednesday 28 May 2025 19:11:37 +0000 (0:00:00.194) 0:00:22.061 *********
2025-05-28 19:11:37.240833 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:37.241849 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:37.242342 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:37.243119 | orchestrator |
2025-05-28 19:11:37.243930 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-28 19:11:37.244933 | orchestrator | Wednesday 28 May 2025 19:11:37 +0000 (0:00:00.167) 0:00:22.229 *********
2025-05-28 19:11:37.621279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:37.621464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:37.622170 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:37.622441 | orchestrator |
2025-05-28 19:11:37.622847 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-28 19:11:37.623180 | orchestrator | Wednesday 28 May 2025 19:11:37 +0000 (0:00:00.380) 0:00:22.610 *********
2025-05-28 19:11:37.788235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:37.788368 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:37.789245 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:37.789901 | orchestrator |
2025-05-28 19:11:37.790158 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-28 19:11:37.792555 | orchestrator | Wednesday 28 May 2025 19:11:37 +0000 (0:00:00.166) 0:00:22.777 *********
2025-05-28 19:11:37.969280 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:37.970392 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:37.972877 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:37.972901 | orchestrator |
2025-05-28 19:11:37.972914 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-28 19:11:37.973358 | orchestrator | Wednesday 28 May 2025 19:11:37 +0000 (0:00:00.179) 0:00:22.956 *********
2025-05-28 19:11:38.148609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:38.149034 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:38.149061 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:38.149240 | orchestrator |
2025-05-28 19:11:38.150116 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-28 19:11:38.150612 | orchestrator | Wednesday 28 May 2025 19:11:38 +0000 (0:00:00.179) 0:00:23.136 *********
2025-05-28 19:11:38.314826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:38.314910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:38.315694 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:38.316847 | orchestrator |
2025-05-28 19:11:38.317701 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-28 19:11:38.318392 | orchestrator | Wednesday 28 May 2025 19:11:38 +0000 (0:00:00.165) 0:00:23.302 *********
2025-05-28 19:11:38.482766 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:38.482966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:38.483965 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:38.485113 | orchestrator |
2025-05-28 19:11:38.485678 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-28 19:11:38.486551 | orchestrator | Wednesday 28 May 2025 19:11:38 +0000 (0:00:00.168) 0:00:23.470 *********
2025-05-28 19:11:39.035130 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:11:39.037263 | orchestrator |
2025-05-28 19:11:39.037295 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-28 19:11:39.037326 | orchestrator | Wednesday 28 May 2025 19:11:39 +0000 (0:00:00.550) 0:00:24.021 *********
2025-05-28 19:11:39.573572 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:11:39.574239 | orchestrator |
2025-05-28 19:11:39.574889 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-28 19:11:39.575399 | orchestrator | Wednesday 28 May 2025 19:11:39 +0000 (0:00:00.540) 0:00:24.562 *********
2025-05-28 19:11:39.733445 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:11:39.734092 | orchestrator |
2025-05-28 19:11:39.736868 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-28 19:11:39.736893 | orchestrator | Wednesday 28 May 2025 19:11:39 +0000 (0:00:00.158) 0:00:24.720 *********
2025-05-28 19:11:39.916992 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'vg_name': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:39.917076 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'vg_name': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:39.918280 | orchestrator |
2025-05-28 19:11:39.918305 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-28 19:11:39.918318 | orchestrator | Wednesday 28 May 2025 19:11:39 +0000 (0:00:00.184) 0:00:24.905 *********
2025-05-28 19:11:40.312315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:11:40.313335 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:11:40.315080 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:11:40.316681 | orchestrator |
2025-05-28 19:11:40.318085 | orchestrator | TASK [Fail if DB LV defined in
lvm_volumes is missing] ************************* 2025-05-28 19:11:40.318901 | orchestrator | Wednesday 28 May 2025 19:11:40 +0000 (0:00:00.393) 0:00:25.299 ********* 2025-05-28 19:11:40.489403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})  2025-05-28 19:11:40.489506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})  2025-05-28 19:11:40.490114 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:11:40.490657 | orchestrator | 2025-05-28 19:11:40.491722 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-28 19:11:40.492355 | orchestrator | Wednesday 28 May 2025 19:11:40 +0000 (0:00:00.177) 0:00:25.476 ********* 2025-05-28 19:11:40.666337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})  2025-05-28 19:11:40.666958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})  2025-05-28 19:11:40.668421 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:11:40.668454 | orchestrator | 2025-05-28 19:11:40.669553 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-28 19:11:40.669944 | orchestrator | Wednesday 28 May 2025 19:11:40 +0000 (0:00:00.177) 0:00:25.654 ********* 2025-05-28 19:11:41.397546 | orchestrator | ok: [testbed-node-3] => { 2025-05-28 19:11:41.397660 | orchestrator |  "lvm_report": { 2025-05-28 19:11:41.397676 | orchestrator |  "lv": [ 2025-05-28 19:11:41.399548 | orchestrator |  { 2025-05-28 19:11:41.399727 | orchestrator |  "lv_name": 
"osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690", 2025-05-28 19:11:41.401176 | orchestrator |  "vg_name": "ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690" 2025-05-28 19:11:41.401336 | orchestrator |  }, 2025-05-28 19:11:41.402675 | orchestrator |  { 2025-05-28 19:11:41.403187 | orchestrator |  "lv_name": "osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3", 2025-05-28 19:11:41.403573 | orchestrator |  "vg_name": "ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3" 2025-05-28 19:11:41.403945 | orchestrator |  } 2025-05-28 19:11:41.404527 | orchestrator |  ], 2025-05-28 19:11:41.405108 | orchestrator |  "pv": [ 2025-05-28 19:11:41.405318 | orchestrator |  { 2025-05-28 19:11:41.406102 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-28 19:11:41.406331 | orchestrator |  "vg_name": "ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3" 2025-05-28 19:11:41.406539 | orchestrator |  }, 2025-05-28 19:11:41.407019 | orchestrator |  { 2025-05-28 19:11:41.407329 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-28 19:11:41.407916 | orchestrator |  "vg_name": "ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690" 2025-05-28 19:11:41.408272 | orchestrator |  } 2025-05-28 19:11:41.408491 | orchestrator |  ] 2025-05-28 19:11:41.409457 | orchestrator |  } 2025-05-28 19:11:41.410393 | orchestrator | } 2025-05-28 19:11:41.410708 | orchestrator | 2025-05-28 19:11:41.411236 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-28 19:11:41.411739 | orchestrator | 2025-05-28 19:11:41.412191 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-28 19:11:41.412519 | orchestrator | Wednesday 28 May 2025 19:11:41 +0000 (0:00:00.729) 0:00:26.383 ********* 2025-05-28 19:11:42.028425 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-28 19:11:42.029048 | orchestrator | 2025-05-28 19:11:42.030063 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-28 
19:11:42.032854 | orchestrator | Wednesday 28 May 2025 19:11:42 +0000 (0:00:00.632) 0:00:27.016 ********* 2025-05-28 19:11:42.298611 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:11:42.298745 | orchestrator | 2025-05-28 19:11:42.298872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:42.300304 | orchestrator | Wednesday 28 May 2025 19:11:42 +0000 (0:00:00.270) 0:00:27.286 ********* 2025-05-28 19:11:42.799689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-28 19:11:42.799888 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-28 19:11:42.800115 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-28 19:11:42.801127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-28 19:11:42.801536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-28 19:11:42.802170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-28 19:11:42.803012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-28 19:11:42.804076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-28 19:11:42.804278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-28 19:11:42.805543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-28 19:11:42.805572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-28 19:11:42.805994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-28 19:11:42.806650 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-28 19:11:42.807116 | orchestrator | 2025-05-28 19:11:42.807917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:42.808295 | orchestrator | Wednesday 28 May 2025 19:11:42 +0000 (0:00:00.500) 0:00:27.787 ********* 2025-05-28 19:11:43.021371 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:43.023069 | orchestrator | 2025-05-28 19:11:43.023100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:43.024234 | orchestrator | Wednesday 28 May 2025 19:11:43 +0000 (0:00:00.218) 0:00:28.006 ********* 2025-05-28 19:11:43.237858 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:43.238099 | orchestrator | 2025-05-28 19:11:43.238706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:43.242169 | orchestrator | Wednesday 28 May 2025 19:11:43 +0000 (0:00:00.219) 0:00:28.226 ********* 2025-05-28 19:11:43.447717 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:43.447903 | orchestrator | 2025-05-28 19:11:43.447920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:43.448022 | orchestrator | Wednesday 28 May 2025 19:11:43 +0000 (0:00:00.210) 0:00:28.436 ********* 2025-05-28 19:11:43.637422 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:43.637647 | orchestrator | 2025-05-28 19:11:43.641340 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:43.641404 | orchestrator | Wednesday 28 May 2025 19:11:43 +0000 (0:00:00.188) 0:00:28.625 ********* 2025-05-28 19:11:43.847082 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:43.849293 | orchestrator | 2025-05-28 19:11:43.849326 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2025-05-28 19:11:43.851808 | orchestrator | Wednesday 28 May 2025 19:11:43 +0000 (0:00:00.209) 0:00:28.834 ********* 2025-05-28 19:11:44.057963 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:44.058472 | orchestrator | 2025-05-28 19:11:44.059299 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:44.062050 | orchestrator | Wednesday 28 May 2025 19:11:44 +0000 (0:00:00.210) 0:00:29.045 ********* 2025-05-28 19:11:44.265553 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:44.265662 | orchestrator | 2025-05-28 19:11:44.265677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:44.269198 | orchestrator | Wednesday 28 May 2025 19:11:44 +0000 (0:00:00.206) 0:00:29.251 ********* 2025-05-28 19:11:44.798651 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:44.799910 | orchestrator | 2025-05-28 19:11:44.800170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:44.801192 | orchestrator | Wednesday 28 May 2025 19:11:44 +0000 (0:00:00.533) 0:00:29.785 ********* 2025-05-28 19:11:45.246129 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3) 2025-05-28 19:11:45.248401 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3) 2025-05-28 19:11:45.248465 | orchestrator | 2025-05-28 19:11:45.248481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:45.248537 | orchestrator | Wednesday 28 May 2025 19:11:45 +0000 (0:00:00.447) 0:00:30.232 ********* 2025-05-28 19:11:45.717206 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0c0aa11d-14fc-40a7-bbcb-a7c7d902b836) 2025-05-28 19:11:45.718483 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0c0aa11d-14fc-40a7-bbcb-a7c7d902b836) 2025-05-28 19:11:45.718665 | orchestrator | 2025-05-28 19:11:45.719303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:45.720354 | orchestrator | Wednesday 28 May 2025 19:11:45 +0000 (0:00:00.473) 0:00:30.706 ********* 2025-05-28 19:11:46.125575 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6fe61b53-6367-46c0-9f1e-24f42cf64445) 2025-05-28 19:11:46.125687 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6fe61b53-6367-46c0-9f1e-24f42cf64445) 2025-05-28 19:11:46.126109 | orchestrator | 2025-05-28 19:11:46.126339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:46.126650 | orchestrator | Wednesday 28 May 2025 19:11:46 +0000 (0:00:00.408) 0:00:31.114 ********* 2025-05-28 19:11:46.557467 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3485bbb9-dc34-4923-9640-15ed9830c3cd) 2025-05-28 19:11:46.557652 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3485bbb9-dc34-4923-9640-15ed9830c3cd) 2025-05-28 19:11:46.559331 | orchestrator | 2025-05-28 19:11:46.562547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:11:46.563847 | orchestrator | Wednesday 28 May 2025 19:11:46 +0000 (0:00:00.432) 0:00:31.546 ********* 2025-05-28 19:11:46.884158 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-28 19:11:46.884755 | orchestrator | 2025-05-28 19:11:46.885249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:46.888804 | orchestrator | Wednesday 28 May 2025 19:11:46 +0000 (0:00:00.326) 0:00:31.873 ********* 2025-05-28 19:11:47.363241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2025-05-28 19:11:47.363352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-28 19:11:47.364134 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-28 19:11:47.365019 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-28 19:11:47.366999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-28 19:11:47.367485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-28 19:11:47.368389 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-28 19:11:47.369910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-28 19:11:47.371497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-28 19:11:47.372434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-28 19:11:47.373267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-28 19:11:47.374121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-28 19:11:47.375421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-28 19:11:47.376104 | orchestrator | 2025-05-28 19:11:47.376860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:47.377916 | orchestrator | Wednesday 28 May 2025 19:11:47 +0000 (0:00:00.476) 0:00:32.349 ********* 2025-05-28 19:11:47.548649 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:47.548827 | orchestrator | 2025-05-28 
19:11:47.548980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:47.549002 | orchestrator | Wednesday 28 May 2025 19:11:47 +0000 (0:00:00.188) 0:00:32.538 ********* 2025-05-28 19:11:47.741616 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:47.741718 | orchestrator | 2025-05-28 19:11:47.741730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:47.741738 | orchestrator | Wednesday 28 May 2025 19:11:47 +0000 (0:00:00.188) 0:00:32.727 ********* 2025-05-28 19:11:48.203401 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:48.207474 | orchestrator | 2025-05-28 19:11:48.207538 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:48.207552 | orchestrator | Wednesday 28 May 2025 19:11:48 +0000 (0:00:00.463) 0:00:33.190 ********* 2025-05-28 19:11:48.429602 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:48.429714 | orchestrator | 2025-05-28 19:11:48.430113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:48.430528 | orchestrator | Wednesday 28 May 2025 19:11:48 +0000 (0:00:00.226) 0:00:33.417 ********* 2025-05-28 19:11:48.671224 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:48.671329 | orchestrator | 2025-05-28 19:11:48.672558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:48.672632 | orchestrator | Wednesday 28 May 2025 19:11:48 +0000 (0:00:00.242) 0:00:33.660 ********* 2025-05-28 19:11:48.894682 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:48.894853 | orchestrator | 2025-05-28 19:11:48.896136 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:48.896161 | orchestrator | Wednesday 28 May 2025 19:11:48 +0000 (0:00:00.221) 
0:00:33.882 ********* 2025-05-28 19:11:49.132312 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:49.132762 | orchestrator | 2025-05-28 19:11:49.133563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:49.133668 | orchestrator | Wednesday 28 May 2025 19:11:49 +0000 (0:00:00.237) 0:00:34.120 ********* 2025-05-28 19:11:49.359349 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:49.360370 | orchestrator | 2025-05-28 19:11:49.361273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:49.364191 | orchestrator | Wednesday 28 May 2025 19:11:49 +0000 (0:00:00.226) 0:00:34.346 ********* 2025-05-28 19:11:50.023406 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-28 19:11:50.023595 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-28 19:11:50.024421 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-28 19:11:50.025994 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-28 19:11:50.026352 | orchestrator | 2025-05-28 19:11:50.027215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:50.027400 | orchestrator | Wednesday 28 May 2025 19:11:50 +0000 (0:00:00.664) 0:00:35.011 ********* 2025-05-28 19:11:50.217096 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:50.217260 | orchestrator | 2025-05-28 19:11:50.217349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:50.217866 | orchestrator | Wednesday 28 May 2025 19:11:50 +0000 (0:00:00.193) 0:00:35.204 ********* 2025-05-28 19:11:50.431199 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:50.431305 | orchestrator | 2025-05-28 19:11:50.434737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:50.435767 | orchestrator | Wednesday 28 
May 2025 19:11:50 +0000 (0:00:00.211) 0:00:35.416 ********* 2025-05-28 19:11:50.645457 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:50.654956 | orchestrator | 2025-05-28 19:11:50.659238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:11:50.662138 | orchestrator | Wednesday 28 May 2025 19:11:50 +0000 (0:00:00.217) 0:00:35.633 ********* 2025-05-28 19:11:51.380967 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:51.381685 | orchestrator | 2025-05-28 19:11:51.383047 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-28 19:11:51.383691 | orchestrator | Wednesday 28 May 2025 19:11:51 +0000 (0:00:00.733) 0:00:36.367 ********* 2025-05-28 19:11:51.523448 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:51.524166 | orchestrator | 2025-05-28 19:11:51.524991 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-28 19:11:51.531194 | orchestrator | Wednesday 28 May 2025 19:11:51 +0000 (0:00:00.143) 0:00:36.510 ********* 2025-05-28 19:11:51.739063 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3ed7399e-dc97-5c28-9f68-879666a39403'}}) 2025-05-28 19:11:51.739831 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0344b063-3cec-5ade-bfbf-9241287811af'}}) 2025-05-28 19:11:51.740649 | orchestrator | 2025-05-28 19:11:51.747255 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-28 19:11:51.747331 | orchestrator | Wednesday 28 May 2025 19:11:51 +0000 (0:00:00.216) 0:00:36.727 ********* 2025-05-28 19:11:53.499257 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'}) 2025-05-28 19:11:53.499695 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'}) 2025-05-28 19:11:53.500264 | orchestrator | 2025-05-28 19:11:53.501417 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-28 19:11:53.502137 | orchestrator | Wednesday 28 May 2025 19:11:53 +0000 (0:00:01.758) 0:00:38.485 ********* 2025-05-28 19:11:53.679772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:11:53.679995 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:11:53.681392 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:53.682486 | orchestrator | 2025-05-28 19:11:53.683877 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-28 19:11:53.687377 | orchestrator | Wednesday 28 May 2025 19:11:53 +0000 (0:00:00.182) 0:00:38.668 ********* 2025-05-28 19:11:54.878459 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'}) 2025-05-28 19:11:54.879366 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'}) 2025-05-28 19:11:54.880024 | orchestrator | 2025-05-28 19:11:54.880627 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-28 19:11:54.881082 | orchestrator | Wednesday 28 May 2025 19:11:54 +0000 (0:00:01.198) 0:00:39.866 ********* 2025-05-28 19:11:55.094121 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 
'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:11:55.099110 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:11:55.099155 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:55.099437 | orchestrator | 2025-05-28 19:11:55.100151 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-28 19:11:55.101096 | orchestrator | Wednesday 28 May 2025 19:11:55 +0000 (0:00:00.212) 0:00:40.079 ********* 2025-05-28 19:11:55.241968 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:55.242451 | orchestrator | 2025-05-28 19:11:55.242948 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-28 19:11:55.244519 | orchestrator | Wednesday 28 May 2025 19:11:55 +0000 (0:00:00.148) 0:00:40.227 ********* 2025-05-28 19:11:55.402400 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:11:55.403339 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:11:55.405225 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:55.405982 | orchestrator | 2025-05-28 19:11:55.406571 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-28 19:11:55.407410 | orchestrator | Wednesday 28 May 2025 19:11:55 +0000 (0:00:00.163) 0:00:40.390 ********* 2025-05-28 19:11:55.724380 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:55.725848 | orchestrator | 2025-05-28 19:11:55.726860 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-28 19:11:55.729208 | orchestrator | 
Wednesday 28 May 2025 19:11:55 +0000 (0:00:00.322) 0:00:40.713 ********* 2025-05-28 19:11:55.895151 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:11:55.896547 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:11:55.898013 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:55.898640 | orchestrator | 2025-05-28 19:11:55.899057 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-28 19:11:55.899563 | orchestrator | Wednesday 28 May 2025 19:11:55 +0000 (0:00:00.168) 0:00:40.881 ********* 2025-05-28 19:11:56.040106 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:56.040319 | orchestrator | 2025-05-28 19:11:56.041446 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-28 19:11:56.042250 | orchestrator | Wednesday 28 May 2025 19:11:56 +0000 (0:00:00.146) 0:00:41.028 ********* 2025-05-28 19:11:56.216358 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:11:56.219555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:11:56.220880 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:11:56.221694 | orchestrator | 2025-05-28 19:11:56.222569 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-28 19:11:56.223242 | orchestrator | Wednesday 28 May 2025 19:11:56 +0000 (0:00:00.175) 0:00:41.204 ********* 2025-05-28 19:11:56.369705 | orchestrator | ok: [testbed-node-4] 
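In the "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs"/"Create block LVs" tasks above, each `ceph_osd_devices` entry (keyed by device, carrying an `osd_lvm_uuid`) maps to a volume group `ceph-<uuid>` and a logical volume `osd-block-<uuid>`. A minimal sketch of that naming scheme follows; the helper name is hypothetical and the exact playbook logic is assumed from the logged item values, not taken from the OSISM role:

```python
def osd_lvm_names(ceph_osd_devices: dict) -> list:
    """Map {device: {'osd_lvm_uuid': ...}} entries to lvm_volumes-style
    dicts with the VG/LV names seen in the log ('ceph-<uuid>' /
    'osd-block-<uuid>')."""
    return [
        {"data": f"osd-block-{v['osd_lvm_uuid']}",
         "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
        for v in ceph_osd_devices.values()
    ]

# Device/uuid pair taken from the log item for testbed-node-4's sdb.
print(osd_lvm_names({"sdb": {"osd_lvm_uuid": "3ed7399e-dc97-5c28-9f68-879666a39403"}}))
```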
2025-05-28 19:11:56.371358 | orchestrator |
2025-05-28 19:11:56.372172 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-28 19:11:56.373057 | orchestrator | Wednesday 28 May 2025 19:11:56 +0000 (0:00:00.153) 0:00:41.358 *********
2025-05-28 19:11:56.542405 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})
2025-05-28 19:11:56.543237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})
2025-05-28 19:11:56.544524 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:11:56.545388 | orchestrator |
2025-05-28 19:11:56.546165 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-28 19:11:56.546769 | orchestrator | Wednesday 28 May 2025 19:11:56 +0000 (0:00:00.172) 0:00:41.531 *********
2025-05-28 19:11:56.728888 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})
2025-05-28 19:11:56.729015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})
2025-05-28 19:11:56.729070 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:11:56.729191 | orchestrator |
2025-05-28 19:11:56.730219 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-28 19:11:56.731260 | orchestrator | Wednesday 28 May 2025 19:11:56 +0000 (0:00:00.185) 0:00:41.716 *********
2025-05-28 19:11:56.903366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})
2025-05-28 19:11:56.903472 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})
2025-05-28 19:11:56.903572 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:11:56.904066 | orchestrator |
2025-05-28 19:11:56.904358 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-28 19:11:56.905230 | orchestrator | Wednesday 28 May 2025 19:11:56 +0000 (0:00:00.174) 0:00:41.890 *********
2025-05-28 19:11:57.049245 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:11:57.049835 | orchestrator |
2025-05-28 19:11:57.053241 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-28 19:11:57.053468 | orchestrator | Wednesday 28 May 2025 19:11:57 +0000 (0:00:00.146) 0:00:42.037 *********
2025-05-28 19:11:57.195507 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:11:57.200239 | orchestrator |
2025-05-28 19:11:57.201985 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-28 19:11:57.202391 | orchestrator | Wednesday 28 May 2025 19:11:57 +0000 (0:00:00.145) 0:00:42.183 *********
2025-05-28 19:11:57.364622 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:11:57.366149 | orchestrator |
2025-05-28 19:11:57.366181 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-28 19:11:57.366195 | orchestrator | Wednesday 28 May 2025 19:11:57 +0000 (0:00:00.167) 0:00:42.351 *********
2025-05-28 19:11:57.512660 | orchestrator | ok: [testbed-node-4] => {
2025-05-28 19:11:57.513421 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-28 19:11:57.514675 | orchestrator | }
2025-05-28 19:11:57.515019 | orchestrator |
2025-05-28 19:11:57.516146 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-28 19:11:57.516997 | orchestrator | Wednesday 28 May 2025 19:11:57 +0000 (0:00:00.150) 0:00:42.501 *********
2025-05-28 19:11:57.883769 | orchestrator | ok: [testbed-node-4] => {
2025-05-28 19:11:57.884022 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-28 19:11:57.884302 | orchestrator | }
2025-05-28 19:11:57.887841 | orchestrator |
2025-05-28 19:11:57.890145 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-28 19:11:57.890188 | orchestrator | Wednesday 28 May 2025 19:11:57 +0000 (0:00:00.371) 0:00:42.873 *********
2025-05-28 19:11:58.025877 | orchestrator | ok: [testbed-node-4] => {
2025-05-28 19:11:58.025999 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-28 19:11:58.026091 | orchestrator | }
2025-05-28 19:11:58.026114 | orchestrator |
2025-05-28 19:11:58.027397 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-28 19:11:58.027438 | orchestrator | Wednesday 28 May 2025 19:11:58 +0000 (0:00:00.139) 0:00:43.013 *********
2025-05-28 19:11:58.496026 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:11:58.498124 | orchestrator |
2025-05-28 19:11:58.498208 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-28 19:11:58.499179 | orchestrator | Wednesday 28 May 2025 19:11:58 +0000 (0:00:00.469) 0:00:43.483 *********
2025-05-28 19:11:59.003450 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:11:59.004101 | orchestrator |
2025-05-28 19:11:59.005143 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-28 19:11:59.006340 | orchestrator | Wednesday 28 May 2025 19:11:58 +0000 (0:00:00.508) 0:00:43.991 *********
2025-05-28 19:11:59.564088 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:11:59.564200 | orchestrator |
2025-05-28 19:11:59.565336 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-28 19:11:59.567455 | orchestrator | Wednesday 28 May 2025 19:11:59 +0000 (0:00:00.557) 0:00:44.549 *********
2025-05-28 19:11:59.723504 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:11:59.723928 | orchestrator |
2025-05-28 19:11:59.725070 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-28 19:11:59.726154 | orchestrator | Wednesday 28 May 2025 19:11:59 +0000 (0:00:00.160) 0:00:44.710 *********
2025-05-28 19:11:59.831968 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:11:59.832594 | orchestrator |
2025-05-28 19:11:59.833928 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-28 19:11:59.835305 | orchestrator | Wednesday 28 May 2025 19:11:59 +0000 (0:00:00.110) 0:00:44.820 *********
2025-05-28 19:11:59.963185 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:11:59.964285 | orchestrator |
2025-05-28 19:11:59.965752 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-28 19:11:59.966946 | orchestrator | Wednesday 28 May 2025 19:11:59 +0000 (0:00:00.131) 0:00:44.951 *********
2025-05-28 19:12:00.112473 | orchestrator | ok: [testbed-node-4] => {
2025-05-28 19:12:00.112978 | orchestrator |  "vgs_report": {
2025-05-28 19:12:00.113910 | orchestrator |  "vg": []
2025-05-28 19:12:00.114964 | orchestrator |  }
2025-05-28 19:12:00.115537 | orchestrator | }
2025-05-28 19:12:00.116557 | orchestrator |
2025-05-28 19:12:00.117213 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-28 19:12:00.117942 | orchestrator | Wednesday 28 May 2025 19:12:00 +0000 (0:00:00.149) 0:00:45.101 *********
2025-05-28 19:12:00.288105 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:12:00.288709 | orchestrator |
2025-05-28 19:12:00.288739 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices]
************************ 2025-05-28 19:12:00.289342 | orchestrator | Wednesday 28 May 2025 19:12:00 +0000 (0:00:00.174) 0:00:45.275 ********* 2025-05-28 19:12:00.424268 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:00.425287 | orchestrator | 2025-05-28 19:12:00.425619 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-28 19:12:00.426699 | orchestrator | Wednesday 28 May 2025 19:12:00 +0000 (0:00:00.136) 0:00:45.412 ********* 2025-05-28 19:12:00.763757 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:00.764422 | orchestrator | 2025-05-28 19:12:00.765671 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-28 19:12:00.767028 | orchestrator | Wednesday 28 May 2025 19:12:00 +0000 (0:00:00.339) 0:00:45.751 ********* 2025-05-28 19:12:00.910965 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:00.911741 | orchestrator | 2025-05-28 19:12:00.913630 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-28 19:12:00.915196 | orchestrator | Wednesday 28 May 2025 19:12:00 +0000 (0:00:00.143) 0:00:45.895 ********* 2025-05-28 19:12:01.050983 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:01.051330 | orchestrator | 2025-05-28 19:12:01.051435 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-28 19:12:01.051523 | orchestrator | Wednesday 28 May 2025 19:12:01 +0000 (0:00:00.144) 0:00:46.039 ********* 2025-05-28 19:12:01.193401 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:01.195242 | orchestrator | 2025-05-28 19:12:01.196165 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-28 19:12:01.196982 | orchestrator | Wednesday 28 May 2025 19:12:01 +0000 (0:00:00.141) 0:00:46.181 ********* 2025-05-28 19:12:01.332910 | orchestrator | skipping: [testbed-node-4] 
2025-05-28 19:12:01.333246 | orchestrator | 2025-05-28 19:12:01.334182 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-28 19:12:01.335255 | orchestrator | Wednesday 28 May 2025 19:12:01 +0000 (0:00:00.138) 0:00:46.320 ********* 2025-05-28 19:12:01.474994 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:01.475800 | orchestrator | 2025-05-28 19:12:01.477298 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-28 19:12:01.478321 | orchestrator | Wednesday 28 May 2025 19:12:01 +0000 (0:00:00.143) 0:00:46.463 ********* 2025-05-28 19:12:01.618489 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:01.619329 | orchestrator | 2025-05-28 19:12:01.620336 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-28 19:12:01.621167 | orchestrator | Wednesday 28 May 2025 19:12:01 +0000 (0:00:00.142) 0:00:46.606 ********* 2025-05-28 19:12:01.760119 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:01.760681 | orchestrator | 2025-05-28 19:12:01.761212 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-28 19:12:01.762293 | orchestrator | Wednesday 28 May 2025 19:12:01 +0000 (0:00:00.142) 0:00:46.748 ********* 2025-05-28 19:12:01.902484 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:01.902934 | orchestrator | 2025-05-28 19:12:01.904354 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-28 19:12:01.905723 | orchestrator | Wednesday 28 May 2025 19:12:01 +0000 (0:00:00.142) 0:00:46.891 ********* 2025-05-28 19:12:02.042527 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:02.042731 | orchestrator | 2025-05-28 19:12:02.043072 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-28 19:12:02.043915 | orchestrator | 
Wednesday 28 May 2025 19:12:02 +0000 (0:00:00.139) 0:00:47.031 ********* 2025-05-28 19:12:02.204488 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:02.204583 | orchestrator | 2025-05-28 19:12:02.205560 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-28 19:12:02.205584 | orchestrator | Wednesday 28 May 2025 19:12:02 +0000 (0:00:00.161) 0:00:47.193 ********* 2025-05-28 19:12:02.344137 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:02.345547 | orchestrator | 2025-05-28 19:12:02.346588 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-28 19:12:02.347382 | orchestrator | Wednesday 28 May 2025 19:12:02 +0000 (0:00:00.139) 0:00:47.333 ********* 2025-05-28 19:12:02.729160 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:02.729818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:02.731480 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:02.731957 | orchestrator | 2025-05-28 19:12:02.732821 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-28 19:12:02.735748 | orchestrator | Wednesday 28 May 2025 19:12:02 +0000 (0:00:00.385) 0:00:47.718 ********* 2025-05-28 19:12:02.917995 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:02.918246 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:02.919485 | orchestrator | skipping: 
[testbed-node-4] 2025-05-28 19:12:02.920158 | orchestrator | 2025-05-28 19:12:02.920996 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-28 19:12:02.921358 | orchestrator | Wednesday 28 May 2025 19:12:02 +0000 (0:00:00.186) 0:00:47.905 ********* 2025-05-28 19:12:03.093673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:03.093831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:03.094145 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:03.094542 | orchestrator | 2025-05-28 19:12:03.094971 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-28 19:12:03.095471 | orchestrator | Wednesday 28 May 2025 19:12:03 +0000 (0:00:00.176) 0:00:48.081 ********* 2025-05-28 19:12:03.269377 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:03.270733 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:03.272318 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:03.273689 | orchestrator | 2025-05-28 19:12:03.274477 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-28 19:12:03.274864 | orchestrator | Wednesday 28 May 2025 19:12:03 +0000 (0:00:00.175) 0:00:48.256 ********* 2025-05-28 19:12:03.455676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 
'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:03.457837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:03.458354 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:03.459144 | orchestrator | 2025-05-28 19:12:03.460252 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-28 19:12:03.460402 | orchestrator | Wednesday 28 May 2025 19:12:03 +0000 (0:00:00.184) 0:00:48.441 ********* 2025-05-28 19:12:03.638602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:03.638908 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:03.640956 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:03.641914 | orchestrator | 2025-05-28 19:12:03.643150 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-28 19:12:03.643967 | orchestrator | Wednesday 28 May 2025 19:12:03 +0000 (0:00:00.184) 0:00:48.626 ********* 2025-05-28 19:12:03.840868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:03.841474 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:03.843207 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:03.844361 | orchestrator | 2025-05-28 19:12:03.845323 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-28 
19:12:03.846133 | orchestrator | Wednesday 28 May 2025 19:12:03 +0000 (0:00:00.202) 0:00:48.828 ********* 2025-05-28 19:12:04.023260 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:04.023616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:04.024913 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:04.025740 | orchestrator | 2025-05-28 19:12:04.027197 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-28 19:12:04.027723 | orchestrator | Wednesday 28 May 2025 19:12:04 +0000 (0:00:00.182) 0:00:49.011 ********* 2025-05-28 19:12:04.537194 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:12:04.537280 | orchestrator | 2025-05-28 19:12:04.539068 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-28 19:12:04.539811 | orchestrator | Wednesday 28 May 2025 19:12:04 +0000 (0:00:00.511) 0:00:49.522 ********* 2025-05-28 19:12:05.049290 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:12:05.049463 | orchestrator | 2025-05-28 19:12:05.050204 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-28 19:12:05.050758 | orchestrator | Wednesday 28 May 2025 19:12:05 +0000 (0:00:00.515) 0:00:50.038 ********* 2025-05-28 19:12:05.384971 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:12:05.385377 | orchestrator | 2025-05-28 19:12:05.386760 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-28 19:12:05.390423 | orchestrator | Wednesday 28 May 2025 19:12:05 +0000 (0:00:00.334) 0:00:50.372 ********* 2025-05-28 19:12:05.574200 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'vg_name': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'}) 2025-05-28 19:12:05.574599 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'vg_name': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'}) 2025-05-28 19:12:05.575488 | orchestrator | 2025-05-28 19:12:05.576964 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-28 19:12:05.577124 | orchestrator | Wednesday 28 May 2025 19:12:05 +0000 (0:00:00.188) 0:00:50.561 ********* 2025-05-28 19:12:05.758557 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:05.759343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:05.760745 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:05.761479 | orchestrator | 2025-05-28 19:12:05.762218 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-28 19:12:05.762551 | orchestrator | Wednesday 28 May 2025 19:12:05 +0000 (0:00:00.184) 0:00:50.746 ********* 2025-05-28 19:12:05.950407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:05.951568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:05.952903 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:05.954967 | orchestrator | 2025-05-28 19:12:05.955018 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-28 19:12:05.955956 | 
orchestrator | Wednesday 28 May 2025 19:12:05 +0000 (0:00:00.192) 0:00:50.938 ********* 2025-05-28 19:12:06.129713 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})  2025-05-28 19:12:06.130345 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})  2025-05-28 19:12:06.130737 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:12:06.131385 | orchestrator | 2025-05-28 19:12:06.133023 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-28 19:12:06.134534 | orchestrator | Wednesday 28 May 2025 19:12:06 +0000 (0:00:00.177) 0:00:51.116 ********* 2025-05-28 19:12:07.006513 | orchestrator | ok: [testbed-node-4] => { 2025-05-28 19:12:07.006732 | orchestrator |  "lvm_report": { 2025-05-28 19:12:07.008342 | orchestrator |  "lv": [ 2025-05-28 19:12:07.009751 | orchestrator |  { 2025-05-28 19:12:07.011817 | orchestrator |  "lv_name": "osd-block-0344b063-3cec-5ade-bfbf-9241287811af", 2025-05-28 19:12:07.012490 | orchestrator |  "vg_name": "ceph-0344b063-3cec-5ade-bfbf-9241287811af" 2025-05-28 19:12:07.014674 | orchestrator |  }, 2025-05-28 19:12:07.015847 | orchestrator |  { 2025-05-28 19:12:07.016730 | orchestrator |  "lv_name": "osd-block-3ed7399e-dc97-5c28-9f68-879666a39403", 2025-05-28 19:12:07.017210 | orchestrator |  "vg_name": "ceph-3ed7399e-dc97-5c28-9f68-879666a39403" 2025-05-28 19:12:07.018582 | orchestrator |  } 2025-05-28 19:12:07.019712 | orchestrator |  ], 2025-05-28 19:12:07.020804 | orchestrator |  "pv": [ 2025-05-28 19:12:07.021757 | orchestrator |  { 2025-05-28 19:12:07.022867 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-28 19:12:07.023642 | orchestrator |  "vg_name": "ceph-3ed7399e-dc97-5c28-9f68-879666a39403" 2025-05-28 19:12:07.024629 | orchestrator |  }, 2025-05-28 
19:12:07.025116 | orchestrator |  { 2025-05-28 19:12:07.025741 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-28 19:12:07.026360 | orchestrator |  "vg_name": "ceph-0344b063-3cec-5ade-bfbf-9241287811af" 2025-05-28 19:12:07.026976 | orchestrator |  } 2025-05-28 19:12:07.027208 | orchestrator |  ] 2025-05-28 19:12:07.028071 | orchestrator |  } 2025-05-28 19:12:07.030688 | orchestrator | } 2025-05-28 19:12:07.031866 | orchestrator | 2025-05-28 19:12:07.032578 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-28 19:12:07.033235 | orchestrator | 2025-05-28 19:12:07.033718 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-28 19:12:07.034133 | orchestrator | Wednesday 28 May 2025 19:12:06 +0000 (0:00:00.875) 0:00:51.992 ********* 2025-05-28 19:12:07.243256 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-28 19:12:07.243357 | orchestrator | 2025-05-28 19:12:07.243751 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-28 19:12:07.244680 | orchestrator | Wednesday 28 May 2025 19:12:07 +0000 (0:00:00.238) 0:00:52.231 ********* 2025-05-28 19:12:07.500075 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:12:07.502677 | orchestrator | 2025-05-28 19:12:07.502725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:07.502739 | orchestrator | Wednesday 28 May 2025 19:12:07 +0000 (0:00:00.254) 0:00:52.486 ********* 2025-05-28 19:12:07.985509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-28 19:12:07.986002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-28 19:12:07.986895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-28 19:12:07.989622 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-28 19:12:07.989654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-28 19:12:07.989981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-28 19:12:07.990960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-28 19:12:07.991889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-28 19:12:07.992670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-28 19:12:07.992986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-28 19:12:07.994280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-28 19:12:07.994377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-28 19:12:07.995309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-28 19:12:07.995337 | orchestrator | 2025-05-28 19:12:07.996162 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:07.996722 | orchestrator | Wednesday 28 May 2025 19:12:07 +0000 (0:00:00.487) 0:00:52.973 ********* 2025-05-28 19:12:08.191765 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:12:08.192106 | orchestrator | 2025-05-28 19:12:08.192990 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:08.193667 | orchestrator | Wednesday 28 May 2025 19:12:08 +0000 (0:00:00.206) 0:00:53.180 ********* 2025-05-28 19:12:08.422272 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:12:08.423223 | orchestrator | 2025-05-28 
19:12:08.424104 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:08.425021 | orchestrator | Wednesday 28 May 2025 19:12:08 +0000 (0:00:00.229) 0:00:53.410 ********* 2025-05-28 19:12:08.641417 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:12:08.642174 | orchestrator | 2025-05-28 19:12:08.644554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:08.644588 | orchestrator | Wednesday 28 May 2025 19:12:08 +0000 (0:00:00.218) 0:00:53.628 ********* 2025-05-28 19:12:08.855430 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:12:08.857254 | orchestrator | 2025-05-28 19:12:08.857692 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:08.858847 | orchestrator | Wednesday 28 May 2025 19:12:08 +0000 (0:00:00.214) 0:00:53.843 ********* 2025-05-28 19:12:09.093902 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:12:09.095531 | orchestrator | 2025-05-28 19:12:09.097082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:09.097945 | orchestrator | Wednesday 28 May 2025 19:12:09 +0000 (0:00:00.238) 0:00:54.081 ********* 2025-05-28 19:12:09.755515 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:12:09.756945 | orchestrator | 2025-05-28 19:12:09.757559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:09.758533 | orchestrator | Wednesday 28 May 2025 19:12:09 +0000 (0:00:00.660) 0:00:54.741 ********* 2025-05-28 19:12:09.978366 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:12:09.978472 | orchestrator | 2025-05-28 19:12:09.978738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:09.979304 | orchestrator | Wednesday 28 May 2025 19:12:09 +0000 (0:00:00.223) 
0:00:54.965 ********* 2025-05-28 19:12:10.202874 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:12:10.203492 | orchestrator | 2025-05-28 19:12:10.203972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:10.204576 | orchestrator | Wednesday 28 May 2025 19:12:10 +0000 (0:00:00.225) 0:00:55.190 ********* 2025-05-28 19:12:10.626431 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87) 2025-05-28 19:12:10.626896 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87) 2025-05-28 19:12:10.628453 | orchestrator | 2025-05-28 19:12:10.628516 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:10.629351 | orchestrator | Wednesday 28 May 2025 19:12:10 +0000 (0:00:00.422) 0:00:55.613 ********* 2025-05-28 19:12:11.088110 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1e78336b-5c45-4f72-b22f-cac6621703c1) 2025-05-28 19:12:11.088214 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1e78336b-5c45-4f72-b22f-cac6621703c1) 2025-05-28 19:12:11.088557 | orchestrator | 2025-05-28 19:12:11.089454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:11.090112 | orchestrator | Wednesday 28 May 2025 19:12:11 +0000 (0:00:00.462) 0:00:56.076 ********* 2025-05-28 19:12:11.549677 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_669b4378-b931-4094-a90b-e4d774be1d1d) 2025-05-28 19:12:11.551081 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_669b4378-b931-4094-a90b-e4d774be1d1d) 2025-05-28 19:12:11.551628 | orchestrator | 2025-05-28 19:12:11.552159 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:11.553549 | orchestrator | Wednesday 28 
May 2025 19:12:11 +0000 (0:00:00.460) 0:00:56.536 ********* 2025-05-28 19:12:11.997421 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_30074f97-ca08-4933-8c1f-7f138584444d) 2025-05-28 19:12:11.997555 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_30074f97-ca08-4933-8c1f-7f138584444d) 2025-05-28 19:12:11.998916 | orchestrator | 2025-05-28 19:12:12.000038 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-28 19:12:12.001006 | orchestrator | Wednesday 28 May 2025 19:12:11 +0000 (0:00:00.448) 0:00:56.985 ********* 2025-05-28 19:12:12.337719 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-28 19:12:12.338214 | orchestrator | 2025-05-28 19:12:12.339195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-28 19:12:12.340147 | orchestrator | Wednesday 28 May 2025 19:12:12 +0000 (0:00:00.339) 0:00:57.324 ********* 2025-05-28 19:12:12.857342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-28 19:12:12.858481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-28 19:12:12.858525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-28 19:12:12.859107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-28 19:12:12.860037 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-28 19:12:12.860626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-28 19:12:12.861181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-28 19:12:12.861762 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-05-28 19:12:12.862338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-05-28 19:12:12.862705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-05-28 19:12:12.863125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-05-28 19:12:12.863384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-05-28 19:12:12.863838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-05-28 19:12:12.864275 | orchestrator |
2025-05-28 19:12:12.864593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:12.864932 | orchestrator | Wednesday 28 May 2025 19:12:12 +0000 (0:00:00.518) 0:00:57.843 *********
2025-05-28 19:12:13.487620 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:13.488637 | orchestrator |
2025-05-28 19:12:13.489656 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:13.490536 | orchestrator | Wednesday 28 May 2025 19:12:13 +0000 (0:00:00.632) 0:00:58.476 *********
2025-05-28 19:12:13.710096 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:13.710414 | orchestrator |
2025-05-28 19:12:13.711914 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:13.712863 | orchestrator | Wednesday 28 May 2025 19:12:13 +0000 (0:00:00.221) 0:00:58.697 *********
2025-05-28 19:12:13.913270 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:13.913719 | orchestrator |
2025-05-28 19:12:13.915176 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:13.916215 | orchestrator | Wednesday 28 May 2025 19:12:13 +0000 (0:00:00.204) 0:00:58.901 *********
2025-05-28 19:12:14.133913 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:14.134183 | orchestrator |
2025-05-28 19:12:14.135184 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:14.135214 | orchestrator | Wednesday 28 May 2025 19:12:14 +0000 (0:00:00.220) 0:00:59.122 *********
2025-05-28 19:12:14.342202 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:14.342733 | orchestrator |
2025-05-28 19:12:14.342794 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:14.343363 | orchestrator | Wednesday 28 May 2025 19:12:14 +0000 (0:00:00.208) 0:00:59.330 *********
2025-05-28 19:12:14.549644 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:14.549866 | orchestrator |
2025-05-28 19:12:14.550440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:14.551526 | orchestrator | Wednesday 28 May 2025 19:12:14 +0000 (0:00:00.205) 0:00:59.535 *********
2025-05-28 19:12:14.775407 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:14.775656 | orchestrator |
2025-05-28 19:12:14.777698 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:14.777829 | orchestrator | Wednesday 28 May 2025 19:12:14 +0000 (0:00:00.226) 0:00:59.762 *********
2025-05-28 19:12:14.988123 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:14.988685 | orchestrator |
2025-05-28 19:12:14.989400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:14.991465 | orchestrator | Wednesday 28 May 2025 19:12:14 +0000 (0:00:00.214) 0:00:59.976 *********
2025-05-28 19:12:15.876362 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-05-28 19:12:15.876590 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-05-28 19:12:15.877874 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-05-28 19:12:15.878472 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-05-28 19:12:15.879070 | orchestrator |
2025-05-28 19:12:15.880000 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:15.880313 | orchestrator | Wednesday 28 May 2025 19:12:15 +0000 (0:00:00.886) 0:01:00.863 *********
2025-05-28 19:12:16.082578 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:16.083457 | orchestrator |
2025-05-28 19:12:16.084422 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:16.085378 | orchestrator | Wednesday 28 May 2025 19:12:16 +0000 (0:00:00.207) 0:01:01.071 *********
2025-05-28 19:12:16.743039 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:16.745681 | orchestrator |
2025-05-28 19:12:16.745728 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:16.745744 | orchestrator | Wednesday 28 May 2025 19:12:16 +0000 (0:00:00.657) 0:01:01.729 *********
2025-05-28 19:12:16.970496 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:16.971116 | orchestrator |
2025-05-28 19:12:16.971850 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-28 19:12:16.973075 | orchestrator | Wednesday 28 May 2025 19:12:16 +0000 (0:00:00.228) 0:01:01.958 *********
2025-05-28 19:12:17.190300 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:17.191201 | orchestrator |
2025-05-28 19:12:17.192940 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-28 19:12:17.193645 | orchestrator | Wednesday 28 May 2025 19:12:17 +0000 (0:00:00.130) 0:01:02.177 *********
2025-05-28 19:12:17.320329 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:17.320705 | orchestrator |
2025-05-28 19:12:17.321509 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-28 19:12:17.321908 | orchestrator | Wednesday 28 May 2025 19:12:17 +0000 (0:00:00.130) 0:01:02.308 *********
2025-05-28 19:12:17.533430 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5db078c0-6128-52c2-9305-54ff671eda75'}})
2025-05-28 19:12:17.534351 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fda1a2ce-c0e6-5c69-aaa5-109883ddc076'}})
2025-05-28 19:12:17.534665 | orchestrator |
2025-05-28 19:12:17.534690 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-28 19:12:17.536090 | orchestrator | Wednesday 28 May 2025 19:12:17 +0000 (0:00:00.214) 0:01:02.522 *********
2025-05-28 19:12:19.356665 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:19.357474 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:19.358804 | orchestrator |
2025-05-28 19:12:19.359073 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-28 19:12:19.359615 | orchestrator | Wednesday 28 May 2025 19:12:19 +0000 (0:00:01.820) 0:01:04.342 *********
2025-05-28 19:12:19.533104 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:19.533202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:19.533208 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:19.533483 | orchestrator |
2025-05-28 19:12:19.535250 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-28 19:12:19.535382 | orchestrator | Wednesday 28 May 2025 19:12:19 +0000 (0:00:00.177) 0:01:04.519 *********
2025-05-28 19:12:20.857276 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:20.857445 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:20.858346 | orchestrator |
2025-05-28 19:12:20.861100 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-28 19:12:20.861459 | orchestrator | Wednesday 28 May 2025 19:12:20 +0000 (0:00:01.324) 0:01:05.844 *********
2025-05-28 19:12:21.020305 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:21.020574 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:21.021866 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:21.025001 | orchestrator |
2025-05-28 19:12:21.025027 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-28 19:12:21.025041 | orchestrator | Wednesday 28 May 2025 19:12:21 +0000 (0:00:00.164) 0:01:06.008 *********
2025-05-28 19:12:21.385472 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:21.385838 | orchestrator |
2025-05-28 19:12:21.386449 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-28 19:12:21.387235 | orchestrator | Wednesday 28 May 2025 19:12:21 +0000 (0:00:00.365) 0:01:06.373 *********
2025-05-28 19:12:21.564373 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:21.565156 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:21.566944 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:21.567875 | orchestrator |
2025-05-28 19:12:21.569139 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-28 19:12:21.570210 | orchestrator | Wednesday 28 May 2025 19:12:21 +0000 (0:00:00.178) 0:01:06.552 *********
2025-05-28 19:12:21.723105 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:21.723290 | orchestrator |
2025-05-28 19:12:21.724474 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-28 19:12:21.725867 | orchestrator | Wednesday 28 May 2025 19:12:21 +0000 (0:00:00.159) 0:01:06.712 *********
2025-05-28 19:12:21.891165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:21.891953 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:21.893040 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:21.894265 | orchestrator |
2025-05-28 19:12:21.895120 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-28 19:12:21.896070 | orchestrator | Wednesday 28 May 2025 19:12:21 +0000 (0:00:00.166) 0:01:06.878 *********
2025-05-28 19:12:22.037150 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:22.037356 | orchestrator |
2025-05-28 19:12:22.038521 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-28 19:12:22.039644 | orchestrator | Wednesday 28 May 2025 19:12:22 +0000 (0:00:00.146) 0:01:07.025 *********
2025-05-28 19:12:22.215701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:22.216270 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:22.216849 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:22.219198 | orchestrator |
2025-05-28 19:12:22.219594 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-28 19:12:22.220157 | orchestrator | Wednesday 28 May 2025 19:12:22 +0000 (0:00:00.177) 0:01:07.203 *********
2025-05-28 19:12:22.352060 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:12:22.352480 | orchestrator |
2025-05-28 19:12:22.353538 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-28 19:12:22.354352 | orchestrator | Wednesday 28 May 2025 19:12:22 +0000 (0:00:00.136) 0:01:07.340 *********
2025-05-28 19:12:22.531390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:22.533573 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:22.533999 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:22.535106 | orchestrator |
2025-05-28 19:12:22.536308 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-28 19:12:22.536868 | orchestrator | Wednesday 28 May 2025 19:12:22 +0000 (0:00:00.178) 0:01:07.518 *********
2025-05-28 19:12:22.683339 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:22.683911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:22.685156 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:22.686098 | orchestrator |
2025-05-28 19:12:22.686914 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-28 19:12:22.688135 | orchestrator | Wednesday 28 May 2025 19:12:22 +0000 (0:00:00.152) 0:01:07.671 *********
2025-05-28 19:12:22.882644 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:22.882876 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:22.883327 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:22.884430 | orchestrator |
2025-05-28 19:12:22.884619 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-28 19:12:22.885079 | orchestrator | Wednesday 28 May 2025 19:12:22 +0000 (0:00:00.198) 0:01:07.870 *********
2025-05-28 19:12:23.031136 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:23.031395 | orchestrator |
2025-05-28 19:12:23.034228 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-28 19:12:23.034686 | orchestrator | Wednesday 28 May 2025 19:12:23 +0000 (0:00:00.149) 0:01:08.019 *********
2025-05-28 19:12:23.415891 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:23.416082 | orchestrator |
2025-05-28 19:12:23.417648 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-28 19:12:23.418913 | orchestrator | Wednesday 28 May 2025 19:12:23 +0000 (0:00:00.384) 0:01:08.404 *********
2025-05-28 19:12:23.561054 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:23.561161 | orchestrator |
2025-05-28 19:12:23.561722 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-28 19:12:23.562473 | orchestrator | Wednesday 28 May 2025 19:12:23 +0000 (0:00:00.143) 0:01:08.548 *********
2025-05-28 19:12:23.729528 | orchestrator | ok: [testbed-node-5] => {
2025-05-28 19:12:23.729637 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-05-28 19:12:23.729717 | orchestrator | }
2025-05-28 19:12:23.731694 | orchestrator |
2025-05-28 19:12:23.731879 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-28 19:12:23.732704 | orchestrator | Wednesday 28 May 2025 19:12:23 +0000 (0:00:00.169) 0:01:08.717 *********
2025-05-28 19:12:23.877537 | orchestrator | ok: [testbed-node-5] => {
2025-05-28 19:12:23.877819 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-05-28 19:12:23.879008 | orchestrator | }
2025-05-28 19:12:23.880322 | orchestrator |
2025-05-28 19:12:23.882429 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-28 19:12:23.883062 | orchestrator | Wednesday 28 May 2025 19:12:23 +0000 (0:00:00.148) 0:01:08.866 *********
2025-05-28 19:12:24.036285 | orchestrator | ok: [testbed-node-5] => {
2025-05-28 19:12:24.036521 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-05-28 19:12:24.038327 | orchestrator | }
2025-05-28 19:12:24.040410 | orchestrator |
2025-05-28 19:12:24.041906 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-28 19:12:24.042660 | orchestrator | Wednesday 28 May 2025 19:12:24 +0000 (0:00:00.159) 0:01:09.025 *********
2025-05-28 19:12:24.574067 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:12:24.575481 | orchestrator |
2025-05-28 19:12:24.576339 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-28 19:12:24.577876 | orchestrator | Wednesday 28 May 2025 19:12:24 +0000 (0:00:00.535) 0:01:09.560 *********
2025-05-28 19:12:25.100823 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:12:25.101388 | orchestrator |
2025-05-28 19:12:25.101417 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-28 19:12:25.102470 | orchestrator | Wednesday 28 May 2025 19:12:25 +0000 (0:00:00.528) 0:01:10.089 *********
2025-05-28 19:12:25.621535 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:12:25.622092 | orchestrator |
2025-05-28 19:12:25.622686 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-28 19:12:25.623161 | orchestrator | Wednesday 28 May 2025 19:12:25 +0000 (0:00:00.519) 0:01:10.608 *********
2025-05-28 19:12:25.777666 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:12:25.778977 | orchestrator |
2025-05-28 19:12:25.780100 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-28 19:12:25.781376 | orchestrator | Wednesday 28 May 2025 19:12:25 +0000 (0:00:00.157) 0:01:10.766 *********
2025-05-28 19:12:25.906266 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:25.906988 | orchestrator |
2025-05-28 19:12:25.907697 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-28 19:12:25.908758 | orchestrator | Wednesday 28 May 2025 19:12:25 +0000 (0:00:00.126) 0:01:10.892 *********
2025-05-28 19:12:26.035418 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:26.035575 | orchestrator |
2025-05-28 19:12:26.036402 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-28 19:12:26.036600 | orchestrator | Wednesday 28 May 2025 19:12:26 +0000 (0:00:00.132) 0:01:11.024 *********
2025-05-28 19:12:26.392434 | orchestrator | ok: [testbed-node-5] => {
2025-05-28 19:12:26.393285 | orchestrator |     "vgs_report": {
2025-05-28 19:12:26.394236 | orchestrator |         "vg": []
2025-05-28 19:12:26.396888 | orchestrator |     }
2025-05-28 19:12:26.396990 | orchestrator | }
2025-05-28 19:12:26.397627 | orchestrator |
2025-05-28 19:12:26.398125 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-28 19:12:26.398240 | orchestrator | Wednesday 28 May 2025 19:12:26 +0000 (0:00:00.355) 0:01:11.380 *********
2025-05-28 19:12:26.537257 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:26.538104 | orchestrator |
2025-05-28 19:12:26.539247 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-28 19:12:26.540818 | orchestrator | Wednesday 28 May 2025 19:12:26 +0000 (0:00:00.155) 0:01:11.524 *********
2025-05-28 19:12:26.693996 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:26.694247 | orchestrator |
2025-05-28 19:12:26.694357 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-28 19:12:26.695190 | orchestrator | Wednesday 28 May 2025 19:12:26 +0000 (0:00:00.146) 0:01:11.680 *********
2025-05-28 19:12:26.840294 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:26.840482 | orchestrator |
2025-05-28 19:12:26.843034 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-28 19:12:26.843070 | orchestrator | Wednesday 28 May 2025 19:12:26 +0000 (0:00:00.146) 0:01:11.826 *********
2025-05-28 19:12:27.004500 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:27.004690 | orchestrator |
2025-05-28 19:12:27.006552 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-28 19:12:27.007034 | orchestrator | Wednesday 28 May 2025 19:12:26 +0000 (0:00:00.165) 0:01:11.992 *********
2025-05-28 19:12:27.155504 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:27.155885 | orchestrator |
2025-05-28 19:12:27.156932 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-28 19:12:27.157441 | orchestrator | Wednesday 28 May 2025 19:12:27 +0000 (0:00:00.151) 0:01:12.144 *********
2025-05-28 19:12:27.288730 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:27.288878 | orchestrator |
2025-05-28 19:12:27.288987 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-28 19:12:27.292906 | orchestrator | Wednesday 28 May 2025 19:12:27 +0000 (0:00:00.127) 0:01:12.272 *********
2025-05-28 19:12:27.451492 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:27.452370 | orchestrator |
2025-05-28 19:12:27.452950 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-28 19:12:27.453744 | orchestrator | Wednesday 28 May 2025 19:12:27 +0000 (0:00:00.168) 0:01:12.441 *********
2025-05-28 19:12:27.591815 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:27.592098 | orchestrator |
2025-05-28 19:12:27.593090 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-28 19:12:27.593979 | orchestrator | Wednesday 28 May 2025 19:12:27 +0000 (0:00:00.137) 0:01:12.579 *********
2025-05-28 19:12:27.749038 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:27.749528 | orchestrator |
2025-05-28 19:12:27.750249 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-28 19:12:27.751265 | orchestrator | Wednesday 28 May 2025 19:12:27 +0000 (0:00:00.154) 0:01:12.736 *********
2025-05-28 19:12:27.910990 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:27.911118 | orchestrator |
2025-05-28 19:12:27.911145 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-28 19:12:27.911246 | orchestrator | Wednesday 28 May 2025 19:12:27 +0000 (0:00:00.154) 0:01:12.891 *********
2025-05-28 19:12:28.076361 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:28.076469 | orchestrator |
2025-05-28 19:12:28.076617 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-28 19:12:28.077298 | orchestrator | Wednesday 28 May 2025 19:12:28 +0000 (0:00:00.173) 0:01:13.065 *********
2025-05-28 19:12:28.448401 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:28.448504 | orchestrator |
2025-05-28 19:12:28.448519 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-28 19:12:28.448532 | orchestrator | Wednesday 28 May 2025 19:12:28 +0000 (0:00:00.370) 0:01:13.435 *********
2025-05-28 19:12:28.590891 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:28.591106 | orchestrator |
2025-05-28 19:12:28.591946 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-28 19:12:28.593352 | orchestrator | Wednesday 28 May 2025 19:12:28 +0000 (0:00:00.143) 0:01:13.578 *********
2025-05-28 19:12:28.729823 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:28.730099 | orchestrator |
2025-05-28 19:12:28.730317 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-28 19:12:28.731905 | orchestrator | Wednesday 28 May 2025 19:12:28 +0000 (0:00:00.139) 0:01:13.717 *********
2025-05-28 19:12:28.915973 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:28.916361 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:28.918549 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:28.919563 | orchestrator |
2025-05-28 19:12:28.920354 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-28 19:12:28.921992 | orchestrator | Wednesday 28 May 2025 19:12:28 +0000 (0:00:00.187) 0:01:13.904 *********
2025-05-28 19:12:29.096295 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:29.096495 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:29.097123 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:29.097809 | orchestrator |
2025-05-28 19:12:29.098490 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-28 19:12:29.098918 | orchestrator | Wednesday 28 May 2025 19:12:29 +0000 (0:00:00.180) 0:01:14.085 *********
2025-05-28 19:12:29.285161 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:29.285843 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:29.286288 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:29.287348 | orchestrator |
2025-05-28 19:12:29.288109 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-28 19:12:29.288608 | orchestrator | Wednesday 28 May 2025 19:12:29 +0000 (0:00:00.187) 0:01:14.272 *********
2025-05-28 19:12:29.449951 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:29.451389 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:29.453164 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:29.453949 | orchestrator |
2025-05-28 19:12:29.454293 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-28 19:12:29.454552 | orchestrator | Wednesday 28 May 2025 19:12:29 +0000 (0:00:00.163) 0:01:14.436 *********
2025-05-28 19:12:29.640963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:29.642482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:29.645581 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:29.645810 | orchestrator |
2025-05-28 19:12:29.645898 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-28 19:12:29.646305 | orchestrator | Wednesday 28 May 2025 19:12:29 +0000 (0:00:00.192) 0:01:14.628 *********
2025-05-28 19:12:29.807112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:29.808702 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:29.809668 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:29.811436 | orchestrator |
2025-05-28 19:12:29.811470 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-28 19:12:29.812096 | orchestrator | Wednesday 28 May 2025 19:12:29 +0000 (0:00:00.166) 0:01:14.794 *********
2025-05-28 19:12:29.997269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:29.998060 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:30.000249 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:30.001512 | orchestrator |
2025-05-28 19:12:30.001943 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-28 19:12:30.003396 | orchestrator | Wednesday 28 May 2025 19:12:29 +0000 (0:00:00.190) 0:01:14.985 *********
2025-05-28 19:12:30.184613 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:30.185117 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:30.185664 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:30.186560 | orchestrator |
2025-05-28 19:12:30.187127 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-28 19:12:30.187746 | orchestrator | Wednesday 28 May 2025 19:12:30 +0000 (0:00:00.188) 0:01:15.173 *********
2025-05-28 19:12:30.929350 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:12:30.931120 | orchestrator |
2025-05-28 19:12:30.932804 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-28 19:12:30.932902 | orchestrator | Wednesday 28 May 2025 19:12:30 +0000 (0:00:00.742) 0:01:15.916 *********
2025-05-28 19:12:31.452260 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:12:31.452410 | orchestrator |
2025-05-28 19:12:31.453509 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-28 19:12:31.454184 | orchestrator | Wednesday 28 May 2025 19:12:31 +0000 (0:00:00.523) 0:01:16.439 *********
2025-05-28 19:12:31.663642 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:12:31.664231 | orchestrator |
2025-05-28 19:12:31.666876 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-28 19:12:31.666903 | orchestrator | Wednesday 28 May 2025 19:12:31 +0000 (0:00:00.210) 0:01:16.650 *********
2025-05-28 19:12:31.872173 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'vg_name': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:31.872355 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'vg_name': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:31.873184 | orchestrator |
2025-05-28 19:12:31.873569 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-28 19:12:31.874282 | orchestrator | Wednesday 28 May 2025 19:12:31 +0000 (0:00:00.210) 0:01:16.861 *********
2025-05-28 19:12:32.064404 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:32.064458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:32.066215 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:32.067407 | orchestrator |
2025-05-28 19:12:32.068403 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-28 19:12:32.069361 | orchestrator | Wednesday 28 May 2025 19:12:32 +0000 (0:00:00.192) 0:01:17.053 *********
2025-05-28 19:12:32.229190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:32.229363 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:32.230918 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:32.231846 | orchestrator |
2025-05-28 19:12:32.232365 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-28 19:12:32.234551 | orchestrator | Wednesday 28 May 2025 19:12:32 +0000 (0:00:00.164) 0:01:17.217 *********
2025-05-28 19:12:32.412595 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:12:32.413242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:12:32.414416 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:32.415131 | orchestrator |
2025-05-28 19:12:32.416256 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-28 19:12:32.416861 | orchestrator | Wednesday 28 May 2025 19:12:32 +0000 (0:00:00.182) 0:01:17.400 *********
2025-05-28 19:12:33.022532 | orchestrator | ok: [testbed-node-5] => {
2025-05-28 19:12:33.022616 | orchestrator |     "lvm_report": {
2025-05-28 19:12:33.022958 | orchestrator |         "lv": [
2025-05-28 19:12:33.023696 | orchestrator |             {
2025-05-28 19:12:33.025859 | orchestrator |                 "lv_name": "osd-block-5db078c0-6128-52c2-9305-54ff671eda75",
2025-05-28 19:12:33.026680 | orchestrator |                 "vg_name": "ceph-5db078c0-6128-52c2-9305-54ff671eda75"
2025-05-28 19:12:33.027445 | orchestrator |             },
2025-05-28 19:12:33.028194 | orchestrator |             {
2025-05-28 19:12:33.029019 | orchestrator |                 "lv_name": "osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076",
2025-05-28 19:12:33.030265 | orchestrator |                 "vg_name": "ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076"
2025-05-28 19:12:33.031146 | orchestrator |             }
2025-05-28 19:12:33.032101 | orchestrator |         ],
2025-05-28 19:12:33.033334 | orchestrator |         "pv": [
2025-05-28 19:12:33.033653 | orchestrator |             {
2025-05-28 19:12:33.034897 | orchestrator |                 "pv_name": "/dev/sdb",
2025-05-28 19:12:33.035571 | orchestrator |                 "vg_name": "ceph-5db078c0-6128-52c2-9305-54ff671eda75"
2025-05-28 19:12:33.036287 | orchestrator |             },
2025-05-28 19:12:33.037041 | orchestrator |             {
2025-05-28 19:12:33.037693 | orchestrator |                 "pv_name": "/dev/sdc",
2025-05-28 19:12:33.038579 | orchestrator |                 "vg_name": "ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076"
2025-05-28 19:12:33.039590 | orchestrator |             }
2025-05-28 19:12:33.039943 | orchestrator |         ]
2025-05-28 19:12:33.040879 | orchestrator |     }
2025-05-28 19:12:33.041533 | orchestrator | }
2025-05-28 19:12:33.041906 | orchestrator |
2025-05-28 19:12:33.043026 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:12:33.043070 | orchestrator | 2025-05-28 19:12:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 19:12:33.043085 | orchestrator | 2025-05-28 19:12:33 | INFO  | Please wait and do not abort execution.
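The `lvm_report` just printed ties each `osd-block-<uuid>` LV and each physical device (`/dev/sdb`, `/dev/sdc`) to a `ceph-<uuid>` VG, where the UUIDs come from `osd_lvm_uuid` in `ceph_osd_devices`. A minimal Python sketch of that naming scheme, inferred from the task output above (the `lvm_layout` helper is hypothetical, not part of the playbook):

```python
# Sketch of the Ceph VG/LV naming convention visible in the play output:
# each device in ceph_osd_devices gets a VG "ceph-<osd_lvm_uuid>" holding
# a data LV "osd-block-<osd_lvm_uuid>". Values copied from the log above.

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "5db078c0-6128-52c2-9305-54ff671eda75"},
    "sdc": {"osd_lvm_uuid": "fda1a2ce-c0e6-5c69-aaa5-109883ddc076"},
}

def lvm_layout(devices):
    """Map each device to its expected Ceph block VG and LV name."""
    layout = {}
    for dev, meta in devices.items():
        uuid = meta["osd_lvm_uuid"]
        layout[dev] = {
            "data_vg": f"ceph-{uuid}",   # volume group name
            "data": f"osd-block-{uuid}", # logical volume name
        }
    return layout
```

This matches the `(item={'data': ..., 'data_vg': ...})` pairs shown in the "Create block VGs" and "Create block LVs" tasks.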
2025-05-28 19:12:33.043297 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-28 19:12:33.043987 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-28 19:12:33.044393 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-28 19:12:33.044847 | orchestrator |
2025-05-28 19:12:33.045404 | orchestrator |
2025-05-28 19:12:33.046155 | orchestrator |
2025-05-28 19:12:33.046874 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:12:33.047581 | orchestrator | Wednesday 28 May 2025 19:12:33 +0000 (0:00:00.609) 0:01:18.009 *********
2025-05-28 19:12:33.048208 | orchestrator | ===============================================================================
2025-05-28 19:12:33.048327 | orchestrator | Create block VGs -------------------------------------------------------- 5.75s
2025-05-28 19:12:33.048847 | orchestrator | Create block LVs -------------------------------------------------------- 4.03s
2025-05-28 19:12:33.049199 | orchestrator | Print LVM report data --------------------------------------------------- 2.21s
2025-05-28 19:12:33.049583 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.93s
2025-05-28 19:12:33.050057 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.80s
2025-05-28 19:12:33.050336 | orchestrator | Add known links to the list of available block devices ------------------ 1.75s
2025-05-28 19:12:33.050603 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.61s
2025-05-28 19:12:33.050877 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s
2025-05-28 19:12:33.051200 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s
2025-05-28 19:12:33.051474 | orchestrator | Add known partitions to the list of available block devices ------------- 1.49s
2025-05-28 19:12:33.051922 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.11s
2025-05-28 19:12:33.052109 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s
2025-05-28 19:12:33.052407 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-05-28 19:12:33.053374 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.77s
2025-05-28 19:12:33.054188 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.77s
2025-05-28 19:12:33.054570 | orchestrator | Get initial list of available block devices ----------------------------- 0.76s
2025-05-28 19:12:33.055056 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.75s
2025-05-28 19:12:33.055647 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2025-05-28 19:12:33.056241 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.70s
2025-05-28 19:12:33.056744 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-05-28 19:12:35.027606 | orchestrator | 2025-05-28 19:12:35 | INFO  | Task fb68769d-2844-4dcf-9e65-e349612edda8 (facts) was prepared for execution.
2025-05-28 19:12:35.027698 | orchestrator | 2025-05-28 19:12:35 | INFO  | It takes a moment until task fb68769d-2844-4dcf-9e65-e349612edda8 (facts) has been started and output is visible here.
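Messages like "was prepared for execution" followed by repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines come from the client polling asynchronous task state on the manager. A minimal sketch of such a poll loop, using a hypothetical `get_state` callable rather than the actual osism client API:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=60.0, sleep=time.sleep):
    """Poll task states until every task has left STARTED or the timeout hits.

    `get_state` maps a task ID to a state string such as 'STARTED' or
    'SUCCESS' (a placeholder for whatever backend query the client uses).
    Returns the final state seen for each task ID.
    """
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {pending}")
        for task_id in list(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.remove(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return states
```

The injectable `sleep` keeps the loop testable; the real client additionally prints the finished task's captured playbook output once a task reaches SUCCESS, as seen further down in this log.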
2025-05-28 19:12:38.256630 | orchestrator |
2025-05-28 19:12:38.256936 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-28 19:12:38.257165 | orchestrator |
2025-05-28 19:12:38.258988 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-28 19:12:38.259851 | orchestrator | Wednesday 28 May 2025 19:12:38 +0000 (0:00:00.193) 0:00:00.193 *********
2025-05-28 19:12:39.158614 | orchestrator | ok: [testbed-manager]
2025-05-28 19:12:39.158961 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:12:39.159516 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:12:39.160956 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:12:39.161343 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:12:39.162137 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:12:39.162171 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:12:39.162521 | orchestrator |
2025-05-28 19:12:39.162838 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-28 19:12:39.163218 | orchestrator | Wednesday 28 May 2025 19:12:39 +0000 (0:00:00.903) 0:00:01.096 *********
2025-05-28 19:12:39.287160 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:12:39.367066 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:12:39.443553 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:12:39.515202 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:12:39.584072 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:12:40.228342 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:12:40.228491 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:40.228510 | orchestrator |
2025-05-28 19:12:40.228592 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-28 19:12:40.228609 | orchestrator |
2025-05-28 19:12:40.228903 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-28 19:12:40.229097 | orchestrator | Wednesday 28 May 2025 19:12:40 +0000 (0:00:01.071) 0:00:02.168 *********
2025-05-28 19:12:44.776004 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:12:44.776508 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:12:44.780084 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:12:44.780120 | orchestrator | ok: [testbed-manager]
2025-05-28 19:12:44.780175 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:12:44.781817 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:12:44.782476 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:12:44.783177 | orchestrator |
2025-05-28 19:12:44.783881 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-28 19:12:44.784159 | orchestrator |
2025-05-28 19:12:44.785180 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-28 19:12:44.786178 | orchestrator | Wednesday 28 May 2025 19:12:44 +0000 (0:00:04.545) 0:00:06.714 *********
2025-05-28 19:12:45.119916 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:12:45.197437 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:12:45.272014 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:12:45.350948 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:12:45.429297 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:12:45.468024 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:12:45.468154 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:12:45.468900 | orchestrator |
2025-05-28 19:12:45.469304 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:12:45.469714 | orchestrator | 2025-05-28 19:12:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-28 19:12:45.469840 | orchestrator | 2025-05-28 19:12:45 | INFO  | Please wait and do not abort execution.
2025-05-28 19:12:45.470980 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:12:45.471734 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:12:45.472169 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:12:45.472984 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:12:45.473546 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:12:45.475263 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:12:45.476218 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:12:45.477379 | orchestrator |
2025-05-28 19:12:45.478257 | orchestrator | Wednesday 28 May 2025 19:12:45 +0000 (0:00:00.693) 0:00:07.408 *********
2025-05-28 19:12:45.479195 | orchestrator | ===============================================================================
2025-05-28 19:12:45.479627 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.55s
2025-05-28 19:12:45.480069 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s
2025-05-28 19:12:45.480331 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.90s
2025-05-28 19:12:45.480558 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.69s
2025-05-28 19:12:46.104305 | orchestrator |
2025-05-28 19:12:46.106329 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed May 28 19:12:46 UTC 2025
2025-05-28 19:12:46.106420 | orchestrator |
2025-05-28 19:12:47.521117 | orchestrator | 2025-05-28 19:12:47 | INFO  | Collection nutshell is prepared for execution
2025-05-28 19:12:47.521923 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [0] - dotfiles
2025-05-28 19:12:47.525580 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [0] - homer
2025-05-28 19:12:47.525652 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [0] - netdata
2025-05-28 19:12:47.525667 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [0] - openstackclient
2025-05-28 19:12:47.525679 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [0] - phpmyadmin
2025-05-28 19:12:47.525749 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [0] - common
2025-05-28 19:12:47.526937 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [1] -- loadbalancer
2025-05-28 19:12:47.527016 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [2] --- opensearch
2025-05-28 19:12:47.527030 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [2] --- mariadb-ng
2025-05-28 19:12:47.527041 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [3] ---- horizon
2025-05-28 19:12:47.527052 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [3] ---- keystone
2025-05-28 19:12:47.527064 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [4] ----- neutron
2025-05-28 19:12:47.527075 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [5] ------ wait-for-nova
2025-05-28 19:12:47.527112 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [5] ------ octavia
2025-05-28 19:12:47.527187 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [4] ----- barbican
2025-05-28 19:12:47.527201 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [4] ----- designate
2025-05-28 19:12:47.527213 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [4] ----- ironic
2025-05-28 19:12:47.527224 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [4] ----- placement
2025-05-28 19:12:47.527360 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [4] ----- magnum
2025-05-28 19:12:47.527539 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [1] -- openvswitch
2025-05-28 19:12:47.527559 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [2] --- ovn
2025-05-28 19:12:47.527753 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [1] -- memcached
2025-05-28 19:12:47.527834 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [1] -- redis
2025-05-28 19:12:47.527935 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [1] -- rabbitmq-ng
2025-05-28 19:12:47.527960 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [0] - kubernetes
2025-05-28 19:12:47.528044 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [1] -- kubeconfig
2025-05-28 19:12:47.528181 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [1] -- copy-kubeconfig
2025-05-28 19:12:47.528210 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [0] - ceph
2025-05-28 19:12:47.529617 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [1] -- ceph-pools
2025-05-28 19:12:47.529666 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [2] --- copy-ceph-keys
2025-05-28 19:12:47.529710 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [3] ---- cephclient
2025-05-28 19:12:47.529732 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-05-28 19:12:47.529951 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [4] ----- wait-for-keystone
2025-05-28 19:12:47.530001 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [5] ------ kolla-ceph-rgw
2025-05-28 19:12:47.530122 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [5] ------ glance
2025-05-28 19:12:47.530146 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [5] ------ cinder
2025-05-28 19:12:47.530167 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [5] ------ nova
2025-05-28 19:12:47.530188 | orchestrator | 2025-05-28 19:12:47 | INFO  | A [4] ----- prometheus
2025-05-28 19:12:47.530206 | orchestrator | 2025-05-28 19:12:47 | INFO  | D [5] ------ grafana
2025-05-28 19:12:47.676611 | orchestrator | 2025-05-28 19:12:47 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-05-28 19:12:47.676716 |
orchestrator | 2025-05-28 19:12:47 | INFO  | Tasks are running in the background 2025-05-28 19:12:49.641650 | orchestrator | 2025-05-28 19:12:49 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-28 19:12:51.756230 | orchestrator | 2025-05-28 19:12:51 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:12:51.757684 | orchestrator | 2025-05-28 19:12:51 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:12:51.758386 | orchestrator | 2025-05-28 19:12:51 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:12:51.759200 | orchestrator | 2025-05-28 19:12:51 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:12:51.761251 | orchestrator | 2025-05-28 19:12:51 | INFO  | Task 3b17af53-0ee6-4545-a4df-a0bf4852b30b is in state STARTED 2025-05-28 19:12:51.765270 | orchestrator | 2025-05-28 19:12:51 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:12:51.765328 | orchestrator | 2025-05-28 19:12:51 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:12:54.827164 | orchestrator | 2025-05-28 19:12:54 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:12:54.830126 | orchestrator | 2025-05-28 19:12:54 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:12:54.830168 | orchestrator | 2025-05-28 19:12:54 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:12:54.830182 | orchestrator | 2025-05-28 19:12:54 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:12:54.830212 | orchestrator | 2025-05-28 19:12:54 | INFO  | Task 3b17af53-0ee6-4545-a4df-a0bf4852b30b is in state STARTED 2025-05-28 19:12:54.830610 | orchestrator | 2025-05-28 19:12:54 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:12:54.830637 | 
orchestrator | 2025-05-28 19:12:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:12:57.889384 | orchestrator | 2025-05-28 19:12:57 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:12:57.892151 | orchestrator | 2025-05-28 19:12:57 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:12:57.892959 | orchestrator | 2025-05-28 19:12:57 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:12:57.897395 | orchestrator | 2025-05-28 19:12:57 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:12:57.899247 | orchestrator | 2025-05-28 19:12:57 | INFO  | Task 3b17af53-0ee6-4545-a4df-a0bf4852b30b is in state STARTED 2025-05-28 19:12:57.899348 | orchestrator | 2025-05-28 19:12:57 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:12:57.901139 | orchestrator | 2025-05-28 19:12:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:00.965723 | orchestrator | 2025-05-28 19:13:00 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:00.969585 | orchestrator | 2025-05-28 19:13:00 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:00.975097 | orchestrator | 2025-05-28 19:13:00 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:00.977972 | orchestrator | 2025-05-28 19:13:00 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:00.979813 | orchestrator | 2025-05-28 19:13:00 | INFO  | Task 3b17af53-0ee6-4545-a4df-a0bf4852b30b is in state STARTED 2025-05-28 19:13:00.984933 | orchestrator | 2025-05-28 19:13:00 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:00.984979 | orchestrator | 2025-05-28 19:13:00 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:04.054322 | orchestrator | 2025-05-28 
19:13:04 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:04.054431 | orchestrator | 2025-05-28 19:13:04 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:04.062484 | orchestrator | 2025-05-28 19:13:04 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:04.062533 | orchestrator | 2025-05-28 19:13:04 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:04.062546 | orchestrator | 2025-05-28 19:13:04 | INFO  | Task 3b17af53-0ee6-4545-a4df-a0bf4852b30b is in state STARTED 2025-05-28 19:13:04.065791 | orchestrator | 2025-05-28 19:13:04 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:04.065817 | orchestrator | 2025-05-28 19:13:04 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:07.167369 | orchestrator | 2025-05-28 19:13:07 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:07.167463 | orchestrator | 2025-05-28 19:13:07 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:07.167475 | orchestrator | 2025-05-28 19:13:07 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:07.178359 | orchestrator | 2025-05-28 19:13:07 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:07.183112 | orchestrator | 2025-05-28 19:13:07 | INFO  | Task 3b17af53-0ee6-4545-a4df-a0bf4852b30b is in state STARTED 2025-05-28 19:13:07.183170 | orchestrator | 2025-05-28 19:13:07 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:07.183182 | orchestrator | 2025-05-28 19:13:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:10.239142 | orchestrator | 2025-05-28 19:13:10 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:10.239297 | orchestrator | 2025-05-28 
19:13:10 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:10.242404 | orchestrator | 2025-05-28 19:13:10 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:10.244105 | orchestrator | 2025-05-28 19:13:10 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:10.245371 | orchestrator | 2025-05-28 19:13:10 | INFO  | Task 3b17af53-0ee6-4545-a4df-a0bf4852b30b is in state STARTED 2025-05-28 19:13:10.246342 | orchestrator | 2025-05-28 19:13:10 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:10.246368 | orchestrator | 2025-05-28 19:13:10 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:13.293940 | orchestrator | 2025-05-28 19:13:13 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:13.294092 | orchestrator | 2025-05-28 19:13:13 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:13.294114 | orchestrator | 2025-05-28 19:13:13 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:13.294124 | orchestrator | 2025-05-28 19:13:13 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:13.297373 | orchestrator | 2025-05-28 19:13:13 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:13.298179 | orchestrator | 2025-05-28 19:13:13 | INFO  | Task 3b17af53-0ee6-4545-a4df-a0bf4852b30b is in state SUCCESS 2025-05-28 19:13:13.300058 | orchestrator | 2025-05-28 19:13:13.300083 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-28 19:13:13.300093 | orchestrator | 2025-05-28 19:13:13.300103 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-05-28 19:13:13.300112 | orchestrator | Wednesday 28 May 2025 19:12:57 +0000 (0:00:00.405) 0:00:00.405 ********* 2025-05-28 19:13:13.300121 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:13:13.300131 | orchestrator | changed: [testbed-manager] 2025-05-28 19:13:13.300140 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:13:13.300149 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:13:13.300157 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:13:13.300166 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:13:13.300175 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:13:13.300184 | orchestrator | 2025-05-28 19:13:13.300193 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-05-28 19:13:13.300201 | orchestrator | Wednesday 28 May 2025 19:13:02 +0000 (0:00:04.890) 0:00:05.296 ********* 2025-05-28 19:13:13.300211 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-28 19:13:13.300220 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-28 19:13:13.300228 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-28 19:13:13.300237 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-28 19:13:13.300246 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-28 19:13:13.300259 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-28 19:13:13.300268 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-28 19:13:13.300277 | orchestrator | 2025-05-28 19:13:13.300286 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-05-28 19:13:13.300295 | orchestrator | Wednesday 28 May 2025 19:13:05 +0000 (0:00:03.248) 0:00:08.545 ********* 2025-05-28 19:13:13.300307 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 19:13:02.822160', 'end': '2025-05-28 19:13:02.827988', 'delta': '0:00:00.005828', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 19:13:13.300324 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 19:13:03.088668', 'end': '2025-05-28 19:13:03.098868', 'delta': '0:00:00.010200', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 19:13:13.300347 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 19:13:03.469325', 'end': '2025-05-28 19:13:03.477500', 'delta': '0:00:00.008175', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 19:13:13.300376 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 19:13:03.936007', 'end': '2025-05-28 19:13:03.944671', 'delta': '0:00:00.008664', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 19:13:13.300390 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 19:13:04.381589', 'end': '2025-05-28 19:13:04.389684', 'delta': '0:00:00.008095', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 19:13:13.300400 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 19:13:04.698193', 'end': '2025-05-28 19:13:04.706590', 'delta': '0:00:00.008397', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 19:13:13.300410 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-28 19:13:04.942994', 'end': '2025-05-28 19:13:04.952724', 'delta': '0:00:00.009730', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-28 19:13:13.300425 | orchestrator | 2025-05-28 19:13:13.300435 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-05-28 19:13:13.300445 | orchestrator | Wednesday 28 May 2025 19:13:07 +0000 (0:00:02.135) 0:00:10.680 ********* 2025-05-28 19:13:13.300454 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-28 19:13:13.300464 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-28 19:13:13.300473 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-28 19:13:13.300482 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-28 19:13:13.300491 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-28 19:13:13.300500 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-28 19:13:13.300509 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-28 19:13:13.300518 | orchestrator | 2025-05-28 19:13:13.300527 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:13:13.300537 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:13:13.300547 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:13:13.300556 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:13:13.300571 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:13:13.300581 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:13:13.300590 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:13:13.300599 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:13:13.300608 | orchestrator | 2025-05-28 19:13:13.300617 | orchestrator | Wednesday 28 May 2025 19:13:10 +0000 (0:00:03.285) 0:00:13.966 ********* 2025-05-28 19:13:13.300626 | orchestrator | =============================================================================== 2025-05-28 19:13:13.300635 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.89s 2025-05-28 19:13:13.300644 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.29s 2025-05-28 19:13:13.300654 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.25s 2025-05-28 19:13:13.300666 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.14s 2025-05-28 19:13:13.300720 | orchestrator | 2025-05-28 19:13:13 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:13.302103 | orchestrator | 2025-05-28 19:13:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:16.367439 | orchestrator | 2025-05-28 19:13:16 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:16.373406 | orchestrator | 2025-05-28 19:13:16 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:16.378421 | orchestrator | 2025-05-28 19:13:16 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:16.379409 | orchestrator | 2025-05-28 19:13:16 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:16.385143 | orchestrator | 2025-05-28 19:13:16 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:16.385206 | orchestrator | 2025-05-28 19:13:16 | INFO  | Task 
1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:16.385221 | orchestrator | 2025-05-28 19:13:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:19.457022 | orchestrator | 2025-05-28 19:13:19 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:19.457179 | orchestrator | 2025-05-28 19:13:19 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:19.463875 | orchestrator | 2025-05-28 19:13:19 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:19.463910 | orchestrator | 2025-05-28 19:13:19 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:19.463923 | orchestrator | 2025-05-28 19:13:19 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:19.470675 | orchestrator | 2025-05-28 19:13:19 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:19.470705 | orchestrator | 2025-05-28 19:13:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:22.532058 | orchestrator | 2025-05-28 19:13:22 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:22.532236 | orchestrator | 2025-05-28 19:13:22 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:22.535209 | orchestrator | 2025-05-28 19:13:22 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:22.541973 | orchestrator | 2025-05-28 19:13:22 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:22.543356 | orchestrator | 2025-05-28 19:13:22 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:22.549930 | orchestrator | 2025-05-28 19:13:22 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:22.549979 | orchestrator | 2025-05-28 19:13:22 | INFO  | Wait 1 
second(s) until the next check 2025-05-28 19:13:25.610580 | orchestrator | 2025-05-28 19:13:25 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:25.617629 | orchestrator | 2025-05-28 19:13:25 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:25.619106 | orchestrator | 2025-05-28 19:13:25 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:25.625012 | orchestrator | 2025-05-28 19:13:25 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:25.629708 | orchestrator | 2025-05-28 19:13:25 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:25.638227 | orchestrator | 2025-05-28 19:13:25 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:25.638283 | orchestrator | 2025-05-28 19:13:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:28.706518 | orchestrator | 2025-05-28 19:13:28 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:28.709706 | orchestrator | 2025-05-28 19:13:28 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:28.709821 | orchestrator | 2025-05-28 19:13:28 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:28.719569 | orchestrator | 2025-05-28 19:13:28 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:28.719625 | orchestrator | 2025-05-28 19:13:28 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:28.726599 | orchestrator | 2025-05-28 19:13:28 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:28.726675 | orchestrator | 2025-05-28 19:13:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:31.787850 | orchestrator | 2025-05-28 19:13:31 | INFO  | Task 
d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:31.794251 | orchestrator | 2025-05-28 19:13:31 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:31.797857 | orchestrator | 2025-05-28 19:13:31 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:31.804585 | orchestrator | 2025-05-28 19:13:31 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:31.807933 | orchestrator | 2025-05-28 19:13:31 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:31.809016 | orchestrator | 2025-05-28 19:13:31 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:31.810774 | orchestrator | 2025-05-28 19:13:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:34.870635 | orchestrator | 2025-05-28 19:13:34 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:34.871344 | orchestrator | 2025-05-28 19:13:34 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:34.874173 | orchestrator | 2025-05-28 19:13:34 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:34.876514 | orchestrator | 2025-05-28 19:13:34 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:34.878516 | orchestrator | 2025-05-28 19:13:34 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state STARTED 2025-05-28 19:13:34.880141 | orchestrator | 2025-05-28 19:13:34 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:34.880169 | orchestrator | 2025-05-28 19:13:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:38.039172 | orchestrator | 2025-05-28 19:13:38 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:38.039251 | orchestrator | 2025-05-28 19:13:38 | INFO  | Task 
bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:38.042469 | orchestrator | 2025-05-28 19:13:38 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:38.042994 | orchestrator | 2025-05-28 19:13:38 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:38.043600 | orchestrator | 2025-05-28 19:13:38 | INFO  | Task 756cc4fe-16cb-495d-a87a-82c88357d8b5 is in state SUCCESS 2025-05-28 19:13:38.044803 | orchestrator | 2025-05-28 19:13:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:13:38.051609 | orchestrator | 2025-05-28 19:13:38 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:38.051647 | orchestrator | 2025-05-28 19:13:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:41.122609 | orchestrator | 2025-05-28 19:13:41 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:41.122991 | orchestrator | 2025-05-28 19:13:41 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:41.124961 | orchestrator | 2025-05-28 19:13:41 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:41.126065 | orchestrator | 2025-05-28 19:13:41 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:41.127016 | orchestrator | 2025-05-28 19:13:41 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:13:41.127124 | orchestrator | 2025-05-28 19:13:41 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:41.127391 | orchestrator | 2025-05-28 19:13:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:44.177703 | orchestrator | 2025-05-28 19:13:44 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:44.177856 | orchestrator | 2025-05-28 19:13:44 | INFO  | Task 
bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:44.177870 | orchestrator | 2025-05-28 19:13:44 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:44.177930 | orchestrator | 2025-05-28 19:13:44 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:44.178075 | orchestrator | 2025-05-28 19:13:44 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:13:44.179343 | orchestrator | 2025-05-28 19:13:44 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:44.179368 | orchestrator | 2025-05-28 19:13:44 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:47.240633 | orchestrator | 2025-05-28 19:13:47 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:47.240753 | orchestrator | 2025-05-28 19:13:47 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:47.240772 | orchestrator | 2025-05-28 19:13:47 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:47.242183 | orchestrator | 2025-05-28 19:13:47 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:47.243738 | orchestrator | 2025-05-28 19:13:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:13:47.244965 | orchestrator | 2025-05-28 19:13:47 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:47.248880 | orchestrator | 2025-05-28 19:13:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:50.331519 | orchestrator | 2025-05-28 19:13:50 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:50.333387 | orchestrator | 2025-05-28 19:13:50 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:50.336341 | orchestrator | 2025-05-28 19:13:50 | INFO  | Task 
9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:50.340233 | orchestrator | 2025-05-28 19:13:50 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:50.342530 | orchestrator | 2025-05-28 19:13:50 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:13:50.351637 | orchestrator | 2025-05-28 19:13:50 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:50.357342 | orchestrator | 2025-05-28 19:13:50 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:53.406301 | orchestrator | 2025-05-28 19:13:53 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:53.407155 | orchestrator | 2025-05-28 19:13:53 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:53.408887 | orchestrator | 2025-05-28 19:13:53 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:53.411016 | orchestrator | 2025-05-28 19:13:53 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:53.415361 | orchestrator | 2025-05-28 19:13:53 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:13:53.415407 | orchestrator | 2025-05-28 19:13:53 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:53.415420 | orchestrator | 2025-05-28 19:13:53 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:56.514814 | orchestrator | 2025-05-28 19:13:56 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:56.514900 | orchestrator | 2025-05-28 19:13:56 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:56.514914 | orchestrator | 2025-05-28 19:13:56 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:56.514927 | orchestrator | 2025-05-28 19:13:56 | INFO  | Task 
861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:56.517037 | orchestrator | 2025-05-28 19:13:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:13:56.525181 | orchestrator | 2025-05-28 19:13:56 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:56.525210 | orchestrator | 2025-05-28 19:13:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:13:59.610460 | orchestrator | 2025-05-28 19:13:59 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state STARTED 2025-05-28 19:13:59.610589 | orchestrator | 2025-05-28 19:13:59 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:13:59.610604 | orchestrator | 2025-05-28 19:13:59 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:13:59.610616 | orchestrator | 2025-05-28 19:13:59 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:13:59.611007 | orchestrator | 2025-05-28 19:13:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:13:59.612989 | orchestrator | 2025-05-28 19:13:59 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:13:59.613071 | orchestrator | 2025-05-28 19:13:59 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:14:02.663325 | orchestrator | 2025-05-28 19:14:02 | INFO  | Task d7e63b72-56fe-4ff0-a7ba-0b96418b5390 is in state SUCCESS 2025-05-28 19:14:02.664006 | orchestrator | 2025-05-28 19:14:02 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:14:02.664044 | orchestrator | 2025-05-28 19:14:02 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:14:02.664342 | orchestrator | 2025-05-28 19:14:02 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:14:02.665141 | orchestrator | 2025-05-28 19:14:02 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:14:02.665606 | orchestrator | 2025-05-28 19:14:02 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:14:02.665628 | orchestrator | 2025-05-28 19:14:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:14:05.709022 | orchestrator | 2025-05-28 19:14:05 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:14:05.709572 | orchestrator | 2025-05-28 19:14:05 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:14:05.709979 | orchestrator | 2025-05-28 19:14:05 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:14:05.711313 | orchestrator | 2025-05-28 19:14:05 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:14:05.712674 | orchestrator | 2025-05-28 19:14:05 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:14:05.712872 | orchestrator | 2025-05-28 19:14:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:14:08.764330 | orchestrator | 2025-05-28 19:14:08 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:14:08.766268 | orchestrator | 2025-05-28 19:14:08 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:14:08.767713 | orchestrator | 2025-05-28 19:14:08 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:14:08.769497 | orchestrator | 2025-05-28 19:14:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:14:08.771979 | orchestrator | 2025-05-28 19:14:08 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state STARTED 2025-05-28 19:14:08.772004 | orchestrator | 2025-05-28 19:14:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:14:11.852766 | orchestrator | 2025-05-28 19:14:11 | INFO  | Task 
bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:14:11.853164 | orchestrator | 2025-05-28 19:14:11 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:14:11.853939 | orchestrator | 2025-05-28 19:14:11 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:14:11.854603 | orchestrator | 2025-05-28 19:14:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:14:11.857109 | orchestrator | 2025-05-28 19:14:11.857158 | orchestrator | 2025-05-28 19:14:11.857180 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-28 19:14:11.857200 | orchestrator | 2025-05-28 19:14:11.857219 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-28 19:14:11.857239 | orchestrator | Wednesday 28 May 2025 19:12:57 +0000 (0:00:00.404) 0:00:00.404 ********* 2025-05-28 19:14:11.857259 | orchestrator | ok: [testbed-manager] => { 2025-05-28 19:14:11.857278 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-05-28 19:14:11.857290 | orchestrator | } 2025-05-28 19:14:11.857302 | orchestrator | 2025-05-28 19:14:11.857313 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-28 19:14:11.857324 | orchestrator | Wednesday 28 May 2025 19:12:58 +0000 (0:00:00.736) 0:00:01.140 ********* 2025-05-28 19:14:11.857335 | orchestrator | ok: [testbed-manager] 2025-05-28 19:14:11.857347 | orchestrator | 2025-05-28 19:14:11.857358 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-28 19:14:11.857369 | orchestrator | Wednesday 28 May 2025 19:13:00 +0000 (0:00:01.999) 0:00:03.139 ********* 2025-05-28 19:14:11.857380 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-28 19:14:11.857391 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-28 19:14:11.857402 | orchestrator | 2025-05-28 19:14:11.857414 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-28 19:14:11.857425 | orchestrator | Wednesday 28 May 2025 19:13:02 +0000 (0:00:01.626) 0:00:04.766 ********* 2025-05-28 19:14:11.857436 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.857447 | orchestrator | 2025-05-28 19:14:11.857458 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-28 19:14:11.857469 | orchestrator | Wednesday 28 May 2025 19:13:06 +0000 (0:00:04.697) 0:00:09.463 ********* 2025-05-28 19:14:11.857498 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.857509 | orchestrator | 2025-05-28 19:14:11.857520 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-28 19:14:11.857531 | orchestrator | Wednesday 28 May 2025 19:13:07 +0000 (0:00:01.044) 0:00:10.508 ********* 2025-05-28 19:14:11.857542 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-05-28 19:14:11.857553 | orchestrator | ok: [testbed-manager] 2025-05-28 19:14:11.857564 | orchestrator | 2025-05-28 19:14:11.857575 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-28 19:14:11.857586 | orchestrator | Wednesday 28 May 2025 19:13:33 +0000 (0:00:25.313) 0:00:35.822 ********* 2025-05-28 19:14:11.857597 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.857608 | orchestrator | 2025-05-28 19:14:11.857619 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:14:11.857630 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:14:11.857641 | orchestrator | 2025-05-28 19:14:11.857652 | orchestrator | Wednesday 28 May 2025 19:13:35 +0000 (0:00:02.666) 0:00:38.489 ********* 2025-05-28 19:14:11.857663 | orchestrator | =============================================================================== 2025-05-28 19:14:11.857674 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.31s 2025-05-28 19:14:11.857685 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.70s 2025-05-28 19:14:11.857721 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.67s 2025-05-28 19:14:11.857733 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.99s 2025-05-28 19:14:11.857743 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.63s 2025-05-28 19:14:11.857754 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.05s 2025-05-28 19:14:11.857766 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.75s 2025-05-28 19:14:11.857776 | orchestrator | 2025-05-28 19:14:11.857787 | orchestrator | 2025-05-28 
19:14:11.857805 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-28 19:14:11.857816 | orchestrator | 2025-05-28 19:14:11.857828 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-28 19:14:11.857839 | orchestrator | Wednesday 28 May 2025 19:12:57 +0000 (0:00:00.501) 0:00:00.501 ********* 2025-05-28 19:14:11.857850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-28 19:14:11.857861 | orchestrator | 2025-05-28 19:14:11.857872 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-28 19:14:11.857883 | orchestrator | Wednesday 28 May 2025 19:12:57 +0000 (0:00:00.840) 0:00:01.342 ********* 2025-05-28 19:14:11.857894 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-28 19:14:11.857905 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-28 19:14:11.857916 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-28 19:14:11.857926 | orchestrator | 2025-05-28 19:14:11.857937 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-28 19:14:11.857948 | orchestrator | Wednesday 28 May 2025 19:13:00 +0000 (0:00:02.597) 0:00:03.940 ********* 2025-05-28 19:14:11.857959 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.857970 | orchestrator | 2025-05-28 19:14:11.857981 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-28 19:14:11.857992 | orchestrator | Wednesday 28 May 2025 19:13:02 +0000 (0:00:02.193) 0:00:06.134 ********* 2025-05-28 19:14:11.858003 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 
2025-05-28 19:14:11.858014 | orchestrator | ok: [testbed-manager] 2025-05-28 19:14:11.858095 | orchestrator | 2025-05-28 19:14:11.858120 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-28 19:14:11.858132 | orchestrator | Wednesday 28 May 2025 19:13:51 +0000 (0:00:49.066) 0:00:55.200 ********* 2025-05-28 19:14:11.858143 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.858154 | orchestrator | 2025-05-28 19:14:11.858165 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-28 19:14:11.858176 | orchestrator | Wednesday 28 May 2025 19:13:53 +0000 (0:00:02.088) 0:00:57.289 ********* 2025-05-28 19:14:11.858190 | orchestrator | ok: [testbed-manager] 2025-05-28 19:14:11.858210 | orchestrator | 2025-05-28 19:14:11.858230 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-28 19:14:11.858249 | orchestrator | Wednesday 28 May 2025 19:13:55 +0000 (0:00:01.237) 0:00:58.527 ********* 2025-05-28 19:14:11.858270 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.858289 | orchestrator | 2025-05-28 19:14:11.858310 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-28 19:14:11.858331 | orchestrator | Wednesday 28 May 2025 19:13:57 +0000 (0:00:02.443) 0:01:00.971 ********* 2025-05-28 19:14:11.858352 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.858371 | orchestrator | 2025-05-28 19:14:11.858391 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-28 19:14:11.858417 | orchestrator | Wednesday 28 May 2025 19:13:58 +0000 (0:00:01.439) 0:01:02.411 ********* 2025-05-28 19:14:11.858441 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.858461 | orchestrator | 2025-05-28 19:14:11.858480 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : 
Copy bash completion script] *** 2025-05-28 19:14:11.858491 | orchestrator | Wednesday 28 May 2025 19:13:59 +0000 (0:00:00.918) 0:01:03.329 ********* 2025-05-28 19:14:11.858502 | orchestrator | ok: [testbed-manager] 2025-05-28 19:14:11.858513 | orchestrator | 2025-05-28 19:14:11.858524 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:14:11.858536 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:14:11.858546 | orchestrator | 2025-05-28 19:14:11.858557 | orchestrator | Wednesday 28 May 2025 19:14:00 +0000 (0:00:00.827) 0:01:04.156 ********* 2025-05-28 19:14:11.858568 | orchestrator | =============================================================================== 2025-05-28 19:14:11.858579 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 49.07s 2025-05-28 19:14:11.858590 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.60s 2025-05-28 19:14:11.858601 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.44s 2025-05-28 19:14:11.858611 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.19s 2025-05-28 19:14:11.858622 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.09s 2025-05-28 19:14:11.858633 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.44s 2025-05-28 19:14:11.858644 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.24s 2025-05-28 19:14:11.858655 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.92s 2025-05-28 19:14:11.858665 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.84s 2025-05-28 19:14:11.858676 | orchestrator | osism.services.openstackclient : 
Copy bash completion script ------------ 0.83s 2025-05-28 19:14:11.858687 | orchestrator | 2025-05-28 19:14:11.858723 | orchestrator | 2025-05-28 19:14:11.858734 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:14:11.858745 | orchestrator | 2025-05-28 19:14:11.858755 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:14:11.858766 | orchestrator | Wednesday 28 May 2025 19:12:55 +0000 (0:00:00.356) 0:00:00.356 ********* 2025-05-28 19:14:11.858776 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-28 19:14:11.858797 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-28 19:14:11.858813 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-28 19:14:11.858824 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-28 19:14:11.858835 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-28 19:14:11.858846 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-28 19:14:11.858857 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-28 19:14:11.858867 | orchestrator | 2025-05-28 19:14:11.858878 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-28 19:14:11.858889 | orchestrator | 2025-05-28 19:14:11.858900 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-28 19:14:11.858910 | orchestrator | Wednesday 28 May 2025 19:12:57 +0000 (0:00:02.342) 0:00:02.699 ********* 2025-05-28 19:14:11.858934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 
19:14:11.858947 | orchestrator | 2025-05-28 19:14:11.858959 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-28 19:14:11.858969 | orchestrator | Wednesday 28 May 2025 19:13:00 +0000 (0:00:03.074) 0:00:05.774 ********* 2025-05-28 19:14:11.858980 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:14:11.858991 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:14:11.859002 | orchestrator | ok: [testbed-manager] 2025-05-28 19:14:11.859013 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:14:11.859024 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:14:11.859035 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:14:11.859046 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:14:11.859057 | orchestrator | 2025-05-28 19:14:11.859068 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-28 19:14:11.859088 | orchestrator | Wednesday 28 May 2025 19:13:03 +0000 (0:00:02.536) 0:00:08.310 ********* 2025-05-28 19:14:11.859100 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:14:11.859111 | orchestrator | ok: [testbed-manager] 2025-05-28 19:14:11.859122 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:14:11.859133 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:14:11.859143 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:14:11.859154 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:14:11.859165 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:14:11.859176 | orchestrator | 2025-05-28 19:14:11.859187 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-28 19:14:11.859198 | orchestrator | Wednesday 28 May 2025 19:13:08 +0000 (0:00:04.954) 0:00:13.264 ********* 2025-05-28 19:14:11.859210 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.859221 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:14:11.859232 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:14:11.859243 | 
orchestrator | changed: [testbed-node-2] 2025-05-28 19:14:11.859254 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:14:11.859265 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:14:11.859276 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:14:11.859287 | orchestrator | 2025-05-28 19:14:11.859298 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-28 19:14:11.859309 | orchestrator | Wednesday 28 May 2025 19:13:11 +0000 (0:00:03.031) 0:00:16.296 ********* 2025-05-28 19:14:11.859325 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:14:11.859343 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:14:11.859362 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:14:11.859381 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:14:11.859407 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:14:11.859430 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:14:11.859448 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.859467 | orchestrator | 2025-05-28 19:14:11.859497 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-28 19:14:11.859515 | orchestrator | Wednesday 28 May 2025 19:13:21 +0000 (0:00:09.788) 0:00:26.084 ********* 2025-05-28 19:14:11.859533 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:14:11.859551 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:14:11.859569 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:14:11.859597 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:14:11.859617 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:14:11.859633 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:14:11.859651 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.859669 | orchestrator | 2025-05-28 19:14:11.859715 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-28 
19:14:11.859736 | orchestrator | Wednesday 28 May 2025 19:13:39 +0000 (0:00:18.363) 0:00:44.448 ********* 2025-05-28 19:14:11.859754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:14:11.859772 | orchestrator | 2025-05-28 19:14:11.859789 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-28 19:14:11.859806 | orchestrator | Wednesday 28 May 2025 19:13:41 +0000 (0:00:01.809) 0:00:46.257 ********* 2025-05-28 19:14:11.859823 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-28 19:14:11.859841 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-28 19:14:11.859858 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-28 19:14:11.859875 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-28 19:14:11.859892 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-28 19:14:11.859909 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-28 19:14:11.859926 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-28 19:14:11.859945 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-28 19:14:11.859964 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-28 19:14:11.859981 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-28 19:14:11.859999 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-28 19:14:11.860018 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-28 19:14:11.860037 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-28 19:14:11.860056 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-28 19:14:11.860068 | orchestrator | 
2025-05-28 19:14:11.860079 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-28 19:14:11.860091 | orchestrator | Wednesday 28 May 2025 19:13:47 +0000 (0:00:06.723) 0:00:52.981 ********* 2025-05-28 19:14:11.860102 | orchestrator | ok: [testbed-manager] 2025-05-28 19:14:11.860113 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:14:11.860124 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:14:11.860135 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:14:11.860145 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:14:11.860156 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:14:11.860167 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:14:11.860177 | orchestrator | 2025-05-28 19:14:11.860188 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-28 19:14:11.860199 | orchestrator | Wednesday 28 May 2025 19:13:50 +0000 (0:00:02.077) 0:00:55.059 ********* 2025-05-28 19:14:11.860210 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:14:11.860221 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.860232 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:14:11.860242 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:14:11.860253 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:14:11.860264 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:14:11.860275 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:14:11.860303 | orchestrator | 2025-05-28 19:14:11.860314 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-28 19:14:11.860325 | orchestrator | Wednesday 28 May 2025 19:13:54 +0000 (0:00:04.695) 0:00:59.754 ********* 2025-05-28 19:14:11.860336 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:14:11.860348 | orchestrator | ok: [testbed-manager] 2025-05-28 19:14:11.860358 | orchestrator | ok: [testbed-node-0] 2025-05-28 
19:14:11.860369 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:14:11.860392 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:14:11.860403 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:14:11.860414 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:14:11.860425 | orchestrator | 2025-05-28 19:14:11.860436 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-28 19:14:11.860447 | orchestrator | Wednesday 28 May 2025 19:13:57 +0000 (0:00:03.067) 0:01:02.822 ********* 2025-05-28 19:14:11.860458 | orchestrator | ok: [testbed-manager] 2025-05-28 19:14:11.860469 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:14:11.860480 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:14:11.860491 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:14:11.860501 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:14:11.860512 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:14:11.860523 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:14:11.860534 | orchestrator | 2025-05-28 19:14:11.860545 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-28 19:14:11.860556 | orchestrator | Wednesday 28 May 2025 19:14:01 +0000 (0:00:03.720) 0:01:06.542 ********* 2025-05-28 19:14:11.860567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-28 19:14:11.860580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:14:11.860591 | orchestrator | 2025-05-28 19:14:11.860602 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-28 19:14:11.860621 | orchestrator | Wednesday 28 May 2025 19:14:03 +0000 (0:00:02.155) 0:01:08.698 ********* 2025-05-28 
19:14:11.860640 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.860658 | orchestrator | 2025-05-28 19:14:11.860685 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-28 19:14:11.860782 | orchestrator | Wednesday 28 May 2025 19:14:06 +0000 (0:00:03.242) 0:01:11.941 ********* 2025-05-28 19:14:11.860797 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:14:11.860808 | orchestrator | changed: [testbed-manager] 2025-05-28 19:14:11.860819 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:14:11.860830 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:14:11.860840 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:14:11.860851 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:14:11.860862 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:14:11.860873 | orchestrator | 2025-05-28 19:14:11.860884 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:14:11.860895 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:14:11.860907 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:14:11.860918 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:14:11.860930 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:14:11.860942 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:14:11.860962 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:14:11.861041 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:14:11.861066 | orchestrator | 2025-05-28 
19:14:11.861087 | orchestrator | Wednesday 28 May 2025 19:14:10 +0000 (0:00:03.861) 0:01:15.802 ********* 2025-05-28 19:14:11.861102 | orchestrator | =============================================================================== 2025-05-28 19:14:11.861118 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.36s 2025-05-28 19:14:11.861134 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.79s 2025-05-28 19:14:11.861150 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.72s 2025-05-28 19:14:11.861167 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.95s 2025-05-28 19:14:11.861183 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 4.70s 2025-05-28 19:14:11.861200 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.86s 2025-05-28 19:14:11.861217 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.72s 2025-05-28 19:14:11.861234 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.24s 2025-05-28 19:14:11.861251 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.07s 2025-05-28 19:14:11.861267 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 3.07s 2025-05-28 19:14:11.861283 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.03s 2025-05-28 19:14:11.861293 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.54s 2025-05-28 19:14:11.861302 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.34s 2025-05-28 19:14:11.861312 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.16s 2025-05-28 19:14:11.861332 | 
orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.08s 2025-05-28 19:14:11.861342 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.81s 2025-05-28 19:14:11.861352 | orchestrator | 2025-05-28 19:14:11 | INFO  | Task 1781ca9b-bc9c-49e3-8f59-a593f64801e7 is in state SUCCESS 2025-05-28 19:14:11.861362 | orchestrator | 2025-05-28 19:14:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:14:14.930278 | orchestrator | 2025-05-28 19:14:14 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:14:14.930384 | orchestrator | 2025-05-28 19:14:14 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:14:14.931506 | orchestrator | 2025-05-28 19:14:14 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:14:14.940428 | orchestrator | 2025-05-28 19:14:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:14:14.940493 | orchestrator | 2025-05-28 19:14:14 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:14:17.983030 | orchestrator | 2025-05-28 19:14:17 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:14:17.985449 | orchestrator | 2025-05-28 19:14:17 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:14:17.986259 | orchestrator | 2025-05-28 19:14:17 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:14:17.993541 | orchestrator | 2025-05-28 19:14:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:14:17.993631 | orchestrator | 2025-05-28 19:14:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:14:21.045450 | orchestrator | 2025-05-28 19:14:21 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:14:21.045586 | orchestrator | 2025-05-28 19:14:21 | 
INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state STARTED 2025-05-28 19:14:21.046922 | orchestrator | 2025-05-28 19:14:21 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:14:21.050397 | orchestrator | 2025-05-28 19:14:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:14:21.050450 | orchestrator | 2025-05-28 19:14:21 | INFO  | Wait 1 second(s) until the next check [identical STARTED/wait polling cycles from 19:14:24 through 19:14:48 elided] 2025-05-28 19:14:51.610243 | orchestrator | 2025-05-28 19:14:51 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:14:51.611035 | orchestrator | 2025-05-28 19:14:51 | INFO  | Task 9e2590c9-1708-4cb6-9743-cc8a2c1f6a2e is in state SUCCESS 2025-05-28 19:14:51.612436 | orchestrator | 2025-05-28 19:14:51 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:14:51.613036 | orchestrator | 2025-05-28 19:14:51 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:14:51.613055 | orchestrator | 2025-05-28 19:14:51 | INFO  | Wait 1 second(s) until the next check [identical polling cycles for the three remaining STARTED tasks from 19:14:54 through 19:15:19 elided] 2025-05-28 19:15:22.090791 | orchestrator | 2025-05-28 19:15:22 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:15:22.090901 | orchestrator |
2025-05-28 19:15:22 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:15:22.092798 | orchestrator | 2025-05-28 19:15:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:15:22.092836 | orchestrator | 2025-05-28 19:15:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:15:25.148447 | orchestrator | 2025-05-28 19:15:25 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state STARTED 2025-05-28 19:15:25.148832 | orchestrator | 2025-05-28 19:15:25 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:15:25.152422 | orchestrator | 2025-05-28 19:15:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:15:25.152492 | orchestrator | 2025-05-28 19:15:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:15:28.211980 | orchestrator | 2025-05-28 19:15:28.212064 | orchestrator | 2025-05-28 19:15:28.212080 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-05-28 19:15:28.212093 | orchestrator | 2025-05-28 19:15:28.212105 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-28 19:15:28.212116 | orchestrator | Wednesday 28 May 2025 19:13:16 +0000 (0:00:00.300) 0:00:00.300 ********* 2025-05-28 19:15:28.212128 | orchestrator | ok: [testbed-manager] 2025-05-28 19:15:28.212140 | orchestrator | 2025-05-28 19:15:28.212151 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-28 19:15:28.212162 | orchestrator | Wednesday 28 May 2025 19:13:18 +0000 (0:00:01.357) 0:00:01.657 ********* 2025-05-28 19:15:28.212173 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-05-28 19:15:28.212185 | orchestrator | 2025-05-28 19:15:28.212196 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-28 
19:15:28.212207 | orchestrator | Wednesday 28 May 2025 19:13:18 +0000 (0:00:00.863) 0:00:02.520 ********* 2025-05-28 19:15:28.212218 | orchestrator | changed: [testbed-manager] 2025-05-28 19:15:28.212229 | orchestrator | 2025-05-28 19:15:28.212240 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-28 19:15:28.212251 | orchestrator | Wednesday 28 May 2025 19:13:21 +0000 (0:00:02.048) 0:00:04.569 ********* 2025-05-28 19:15:28.212262 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-05-28 19:15:28.212274 | orchestrator | ok: [testbed-manager] 2025-05-28 19:15:28.212285 | orchestrator | 2025-05-28 19:15:28.212296 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-28 19:15:28.212307 | orchestrator | Wednesday 28 May 2025 19:14:36 +0000 (0:01:15.528) 0:01:20.097 ********* 2025-05-28 19:15:28.212318 | orchestrator | changed: [testbed-manager] 2025-05-28 19:15:28.212328 | orchestrator | 2025-05-28 19:15:28.212339 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:15:28.212351 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:15:28.212382 | orchestrator | 2025-05-28 19:15:28.212394 | orchestrator | Wednesday 28 May 2025 19:14:50 +0000 (0:00:14.078) 0:01:34.176 ********* 2025-05-28 19:15:28.212405 | orchestrator | =============================================================================== 2025-05-28 19:15:28.212416 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 75.53s 2025-05-28 19:15:28.212427 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 14.08s 2025-05-28 19:15:28.212438 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.05s 2025-05-28 19:15:28.212449 | 
orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.36s 2025-05-28 19:15:28.212460 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.86s 2025-05-28 19:15:28.212470 | orchestrator | 2025-05-28 19:15:28.212481 | orchestrator | 2025-05-28 19:15:28.212492 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-05-28 19:15:28.212503 | orchestrator | 2025-05-28 19:15:28.212514 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-28 19:15:28.212525 | orchestrator | Wednesday 28 May 2025 19:12:50 +0000 (0:00:00.385) 0:00:00.385 ********* 2025-05-28 19:15:28.212543 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:15:28.212557 | orchestrator | 2025-05-28 19:15:28.212569 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-28 19:15:28.212582 | orchestrator | Wednesday 28 May 2025 19:12:52 +0000 (0:00:01.872) 0:00:02.258 ********* 2025-05-28 19:15:28.212594 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 19:15:28.212607 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 19:15:28.212619 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 19:15:28.212654 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 19:15:28.212667 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 19:15:28.212678 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 19:15:28.212690 | orchestrator | changed: [testbed-node-1] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 19:15:28.212702 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 19:15:28.212714 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 19:15:28.212728 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 19:15:28.212740 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 19:15:28.212752 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 19:15:28.212764 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 19:15:28.212777 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 19:15:28.212789 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-28 19:15:28.212800 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 19:15:28.212811 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 19:15:28.212836 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 19:15:28.212849 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-28 19:15:28.212860 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 19:15:28.212871 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-28 19:15:28.212889 | orchestrator | 2025-05-28 19:15:28.212900 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-28 
19:15:28.212911 | orchestrator | Wednesday 28 May 2025 19:12:57 +0000 (0:00:04.991) 0:00:07.249 ********* 2025-05-28 19:15:28.212922 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:15:28.212934 | orchestrator | 2025-05-28 19:15:28.212945 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-28 19:15:28.212957 | orchestrator | Wednesday 28 May 2025 19:13:00 +0000 (0:00:02.773) 0:00:10.022 ********* 2025-05-28 19:15:28.212973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.212989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.213006 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.213018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.213030 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.213041 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-05-28 19:15:28.213068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213093 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-05-28 19:15:28.213105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.213158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213217 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213257 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.213286 | orchestrator | 2025-05-28 19:15:28.213297 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-28 19:15:28.213308 | orchestrator | Wednesday 28 May 2025 19:13:07 +0000 (0:00:06.662) 0:00:16.685 ********* 2025-05-28 19:15:28.213326 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.213339 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.213379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213402 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:15:28.213414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.213445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-28 19:15:28.213469 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:15:28.213481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.213493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213520 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:15:28.213531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.213543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213572 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:15:28.213583 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:15:28.213600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.213613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213663 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:15:28.213675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.213691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213721 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:15:28.213732 | orchestrator | 2025-05-28 19:15:28.213743 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-28 19:15:28.213755 | orchestrator | Wednesday 28 May 2025 19:13:09 +0000 (0:00:01.937) 0:00:18.623 ********* 2025-05-28 19:15:28.213766 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.213785 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213797 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213809 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:15:28.213820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.213832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.213879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28.213921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:15:28 | INFO  | Task bf8ec1ea-434f-4cef-b39e-5732175b8c4c is in state SUCCESS 2025-05-28 19:15:28.214439 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:15:28.214450 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:15:28.214462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-28 19:15:28.214474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.214486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.214497 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:15:28.214514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.214533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.214545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.214557 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:15:28.214568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.214588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.214601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.214612 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:15:28.214674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.214687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.214706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.214718 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:15:28.214729 | orchestrator |
2025-05-28 19:15:28.214741 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-05-28 19:15:28.214752 | orchestrator | Wednesday 28 May 2025 19:13:11 +0000 (0:00:02.430) 0:00:21.053 *********
2025-05-28 19:15:28.214763 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:15:28.214774 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:15:28.214785 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:15:28.214796 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:15:28.214807 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:15:28.214818 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:15:28.214829 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:15:28.214840 | orchestrator |
2025-05-28 19:15:28.214851 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-05-28 19:15:28.214862 | orchestrator | Wednesday 28 May 2025 19:13:12 +0000 (0:00:01.180) 0:00:22.235 *********
2025-05-28 19:15:28.214873 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:15:28.214883 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:15:28.214894 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:15:28.214905 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:15:28.214916 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:15:28.214927 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:15:28.214938 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:15:28.214948 | orchestrator |
2025-05-28 19:15:28.214959 | orchestrator | TASK [common : Ensure fluentd image is present for label check] ****************
2025-05-28 19:15:28.214969 | orchestrator | Wednesday 28 May 2025 19:13:13 +0000 (0:00:01.006) 0:00:23.242 *********
2025-05-28 19:15:28.214979 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:15:28.214989 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:15:28.214998 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:15:28.215008 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:15:28.215018 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:15:28.215029 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:15:28.215040 | orchestrator | changed: [testbed-manager]
2025-05-28 19:15:28.215051 | orchestrator |
2025-05-28 19:15:28.215062 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ******************************
2025-05-28 19:15:28.215073 | orchestrator | Wednesday 28 May 2025 19:13:50 +0000 (0:00:37.119) 0:01:00.363 *********
2025-05-28 19:15:28.215084 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:15:28.215101 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:15:28.215112 | orchestrator | ok: [testbed-manager]
2025-05-28 19:15:28.215123 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:15:28.215134 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:15:28.215145 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:15:28.215156 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:15:28.215167 | orchestrator |
2025-05-28 19:15:28.215178 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-28 19:15:28.215189 | orchestrator | Wednesday 28 May 2025 19:13:54 +0000 (0:00:04.050) 0:01:04.413 *********
2025-05-28 19:15:28.215200 | orchestrator | ok: [testbed-manager]
2025-05-28 19:15:28.215216 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:15:28.215226 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:15:28.215237 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:15:28.215248 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:15:28.215259 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:15:28.215269 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:15:28.215280 | orchestrator |
2025-05-28 19:15:28.215291 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ******************************
2025-05-28 19:15:28.215302 | orchestrator | Wednesday 28 May 2025 19:13:56 +0000 (0:00:01.380) 0:01:05.794 *********
2025-05-28 19:15:28.215313 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:15:28.215324 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:15:28.215335 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:15:28.215345 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:15:28.215356 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:15:28.215368 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:15:28.215384 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:15:28.215394 | orchestrator |
2025-05-28 19:15:28.215404 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-28 19:15:28.215415 | orchestrator | Wednesday 28 May 2025 19:13:57 +0000 (0:00:01.413) 0:01:07.207 *********
2025-05-28 19:15:28.215424 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:15:28.215434 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:15:28.215444 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:15:28.215453 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:15:28.215463 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:15:28.215472 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:15:28.215482 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:15:28.215492 | orchestrator |
2025-05-28 19:15:28.215501 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-05-28 19:15:28.215511 | orchestrator | Wednesday 28 May 2025 19:13:58 +0000 (0:00:01.061) 0:01:08.268 *********
2025-05-28 19:15:28.215521 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value':
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.215539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.215550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.215560 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.215592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.215613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.215652 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.215684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.215808 | orchestrator |
2025-05-28 19:15:28.215818 | orchestrator | TASK [common : Find custom fluentd input config files]
*************************
2025-05-28 19:15:28.215828 | orchestrator | Wednesday 28 May 2025 19:14:04 +0000 (0:00:05.461) 0:01:13.730 *********
2025-05-28 19:15:28.215837 | orchestrator | [WARNING]: Skipped
2025-05-28 19:15:28.215847 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-05-28 19:15:28.215857 | orchestrator | to this access issue:
2025-05-28 19:15:28.215867 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-05-28 19:15:28.215877 | orchestrator | directory
2025-05-28 19:15:28.215886 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 19:15:28.215896 | orchestrator |
2025-05-28 19:15:28.215906 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-05-28 19:15:28.215916 | orchestrator | Wednesday 28 May 2025 19:14:05 +0000 (0:00:00.930) 0:01:14.660 *********
2025-05-28 19:15:28.215925 | orchestrator | [WARNING]: Skipped
2025-05-28 19:15:28.215935 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-05-28 19:15:28.215944 | orchestrator | to this access issue:
2025-05-28 19:15:28.215954 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-05-28 19:15:28.215964 | orchestrator | directory
2025-05-28 19:15:28.215974 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 19:15:28.215984 | orchestrator |
2025-05-28 19:15:28.215993 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-05-28 19:15:28.216003 | orchestrator | Wednesday 28 May 2025 19:14:05 +0000 (0:00:00.527) 0:01:15.188 *********
2025-05-28 19:15:28.216012 | orchestrator | [WARNING]: Skipped
2025-05-28 19:15:28.216022 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-05-28 19:15:28.216032 | orchestrator | to this access issue:
2025-05-28 19:15:28.216041 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-05-28 19:15:28.216051 | orchestrator | directory
2025-05-28 19:15:28.216061 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 19:15:28.216071 | orchestrator |
2025-05-28 19:15:28.216080 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-05-28 19:15:28.216090 | orchestrator | Wednesday 28 May 2025 19:14:06 +0000 (0:00:00.728) 0:01:15.916 *********
2025-05-28 19:15:28.216100 | orchestrator | [WARNING]: Skipped
2025-05-28 19:15:28.216118 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-05-28 19:15:28.216128 | orchestrator | to this access issue:
2025-05-28 19:15:28.216137 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-05-28 19:15:28.216147 | orchestrator | directory
2025-05-28 19:15:28.216157 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 19:15:28.216166 | orchestrator |
2025-05-28 19:15:28.216176 | orchestrator | TASK [common : Copying over td-agent.conf] *************************************
2025-05-28 19:15:28.216186 | orchestrator | Wednesday 28 May 2025 19:14:07 +0000 (0:00:01.182) 0:01:17.099 *********
2025-05-28 19:15:28.216196 | orchestrator | changed: [testbed-manager]
2025-05-28 19:15:28.216206 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:15:28.216216 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:15:28.216226 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:15:28.216235 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:15:28.216245 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:15:28.216255 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:15:28.216264 | orchestrator |
2025-05-28 19:15:28.216274 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-05-28 19:15:28.216284 | orchestrator | Wednesday 28 May 2025 19:14:13 +0000 (0:00:05.757) 0:01:22.857 *********
2025-05-28 19:15:28.216293 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-28 19:15:28.216303 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-28 19:15:28.216313 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-28 19:15:28.216323 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-28 19:15:28.216333 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-28 19:15:28.216343 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-28 19:15:28.216352 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-28 19:15:28.216362 | orchestrator |
2025-05-28 19:15:28.216372 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-05-28 19:15:28.216382 | orchestrator | Wednesday 28 May 2025 19:14:17 +0000 (0:00:03.242) 0:01:26.550 *********
2025-05-28 19:15:28.216392 | orchestrator | changed: [testbed-manager]
2025-05-28 19:15:28.216401 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:15:28.216411 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:15:28.216421 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:15:28.216430 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:15:28.216444 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:15:28.216454 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:15:28.216464 | orchestrator |
2025-05-28 19:15:28.216474 | orchestrator | TASK [common : Ensuring config
directories have correct owner and permission] ***
2025-05-28 19:15:28.216484 | orchestrator | Wednesday 28 May 2025 19:14:20 +0000 (0:00:03.242) 0:01:29.792 *********
2025-05-28 19:15:28.216494 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.216505 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216520 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.216534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216545 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216568 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216579 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.216595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216606 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.216659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216676 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.216686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216696 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216707 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.216727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216736 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216749 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216758 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216766 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-28 19:15:28.216778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216786 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.216795 | orchestrator |
2025-05-28 19:15:28.216803 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-05-28 19:15:28.216811 | orchestrator | Wednesday 28 May 2025 19:14:22 +0000 (0:00:02.484) 0:01:32.277 *********
2025-05-28 19:15:28.216819 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-28 19:15:28.216827 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-28 19:15:28.216835 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-28 19:15:28.216843 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-28 19:15:28.216850 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-28 19:15:28.216858 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-28 19:15:28.216866 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-28 19:15:28.216874 | orchestrator |
2025-05-28 19:15:28.216882 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-05-28 19:15:28.216898 | orchestrator | Wednesday 28 May 2025 19:14:25 +0000 (0:00:02.932) 0:01:35.210 *********
2025-05-28 19:15:28.216910 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-28 19:15:28.216918 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-28 19:15:28.216926 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-28 19:15:28.216934 | orchestrator | changed:
[testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 19:15:28.216942 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 19:15:28.216950 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 19:15:28.216958 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-28 19:15:28.216966 | orchestrator | 2025-05-28 19:15:28.216974 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-28 19:15:28.216982 | orchestrator | Wednesday 28 May 2025 19:14:28 +0000 (0:00:02.473) 0:01:37.683 ********* 2025-05-28 19:15:28.216990 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.216998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.217007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.217015 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.217043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.217052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217072 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.217083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217092 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217101 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217118 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-28 19:15:28.217127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217144 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:15:28.217181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.217194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:15:28.217202 | orchestrator |
2025-05-28 19:15:28.217210 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-05-28 19:15:28.217218 | orchestrator | Wednesday 28 May 2025 19:14:32 +0000 (0:00:03.739) 0:01:41.423 *********
2025-05-28 19:15:28.217226 | orchestrator | changed: [testbed-manager]
2025-05-28 19:15:28.217238 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:15:28.217246 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:15:28.217254 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:15:28.217262 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:15:28.217270 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:15:28.217278 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:15:28.217286 | orchestrator |
2025-05-28 19:15:28.217294 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-05-28 19:15:28.217302 | orchestrator | Wednesday 28 May 2025 19:14:33 +0000 (0:00:01.954) 0:01:43.378 *********
2025-05-28 19:15:28.217310 | orchestrator | changed: [testbed-manager]
2025-05-28 19:15:28.217318 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:15:28.217326 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:15:28.217334 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:15:28.217341 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:15:28.217349 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:15:28.217357 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:15:28.217365 | orchestrator |
2025-05-28 19:15:28.217373 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-28 19:15:28.217381 | orchestrator | Wednesday 28 May 2025 19:14:35 +0000 (0:00:01.646) 0:01:45.024 *********
2025-05-28 19:15:28.217389 | orchestrator |
2025-05-28 19:15:28.217397 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-28 19:15:28.217405 | orchestrator | Wednesday 28 May 2025 19:14:35 +0000 (0:00:00.067) 0:01:45.092 *********
2025-05-28 19:15:28.217413 | orchestrator |
2025-05-28 19:15:28.217421 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-28 19:15:28.217429 | orchestrator | Wednesday 28 May 2025 19:14:35 +0000 (0:00:00.060) 0:01:45.152 *********
2025-05-28 19:15:28.217437 | orchestrator |
2025-05-28 19:15:28.217445 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-28 19:15:28.217453 | orchestrator | Wednesday 28 May 2025 19:14:35 +0000 (0:00:00.057) 0:01:45.209 *********
2025-05-28 19:15:28.217461 | orchestrator |
2025-05-28 19:15:28.217468 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-28 19:15:28.217476 | orchestrator | Wednesday 28 May 2025 19:14:36 +0000 (0:00:00.272) 0:01:45.481 *********
2025-05-28 19:15:28.217484 | orchestrator |
2025-05-28 19:15:28.217492 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-28 19:15:28.217500 | orchestrator | Wednesday 28 May 2025 19:14:36 +0000 (0:00:00.060) 0:01:45.542 *********
2025-05-28 19:15:28.217508 | orchestrator |
2025-05-28 19:15:28.217516 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-28 19:15:28.217524 | orchestrator | Wednesday 28 May 2025 19:14:36 +0000 (0:00:00.060) 0:01:45.602 *********
2025-05-28 19:15:28.217531 | orchestrator |
2025-05-28 19:15:28.217539 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-05-28 19:15:28.217547 | orchestrator | Wednesday 28 May 2025 19:14:36 +0000 (0:00:00.082) 0:01:45.685 *********
2025-05-28 19:15:28.217559 | orchestrator | changed: [testbed-manager]
2025-05-28 19:15:28.217567 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:15:28.217575 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:15:28.217583 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:15:28.217591 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:15:28.217602 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:15:28.217610 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:15:28.217618 | orchestrator |
2025-05-28 19:15:28.217637 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-05-28 19:15:28.217645 | orchestrator | Wednesday 28 May 2025 19:14:45 +0000 (0:00:09.497) 0:01:55.182 *********
2025-05-28 19:15:28.217653 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:15:28.217661 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:15:28.217669 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:15:28.217677 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:15:28.217685 | orchestrator | changed: [testbed-manager]
2025-05-28 19:15:28.217692 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:15:28.217700 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:15:28.217708 | orchestrator |
2025-05-28 19:15:28.217716 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-05-28 19:15:28.217724 | orchestrator | Wednesday 28 May 2025 19:15:14 +0000 (0:00:28.889) 0:02:24.071 *********
2025-05-28 19:15:28.217732 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:15:28.217740 | orchestrator | ok: [testbed-manager]
2025-05-28 19:15:28.217748 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:15:28.217756 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:15:28.217764 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:15:28.217772 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:15:28.217780 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:15:28.217788 | orchestrator |
2025-05-28 19:15:28.217796 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-05-28 19:15:28.217804 | orchestrator | Wednesday 28 May 2025 19:15:17 +0000 (0:00:02.703) 0:02:26.775 *********
2025-05-28 19:15:28.217812 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:15:28.217820 | orchestrator | changed: [testbed-manager]
2025-05-28 19:15:28.217828 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:15:28.217835 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:15:28.217843 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:15:28.217851 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:15:28.217859 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:15:28.217867 | orchestrator |
2025-05-28 19:15:28.217875 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:15:28.217883 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 19:15:28.217892 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 19:15:28.217900 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 19:15:28.217913 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6
rescued=0 ignored=0
2025-05-28 19:15:28.217921 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 19:15:28.217929 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 19:15:28.217937 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 19:15:28.217945 | orchestrator |
2025-05-28 19:15:28.217957 | orchestrator |
2025-05-28 19:15:28.217965 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:15:28.217973 | orchestrator | Wednesday 28 May 2025 19:15:27 +0000 (0:00:09.800) 0:02:36.576 *********
2025-05-28 19:15:28.217981 | orchestrator | ===============================================================================
2025-05-28 19:15:28.217989 | orchestrator | common : Ensure fluentd image is present for label check --------------- 37.12s
2025-05-28 19:15:28.217997 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.89s
2025-05-28 19:15:28.218005 | orchestrator | common : Restart cron container ----------------------------------------- 9.80s
2025-05-28 19:15:28.218013 | orchestrator | common : Restart fluentd container -------------------------------------- 9.50s
2025-05-28 19:15:28.218052 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.66s
2025-05-28 19:15:28.218061 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 5.76s
2025-05-28 19:15:28.218069 | orchestrator | common : Copying over config.json files for services -------------------- 5.46s
2025-05-28 19:15:28.218077 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.99s
2025-05-28 19:15:28.218085 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 4.05s
2025-05-28 19:15:28.218093 | orchestrator | common : Check common containers ---------------------------------------- 3.74s
2025-05-28 19:15:28.218101 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.69s
2025-05-28 19:15:28.218109 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.24s
2025-05-28 19:15:28.218117 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.93s
2025-05-28 19:15:28.218125 | orchestrator | common : include_tasks -------------------------------------------------- 2.77s
2025-05-28 19:15:28.218133 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.70s
2025-05-28 19:15:28.218141 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.48s
2025-05-28 19:15:28.218153 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.47s
2025-05-28 19:15:28.218161 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.43s
2025-05-28 19:15:28.218169 | orchestrator | common : Creating log volume -------------------------------------------- 1.95s
2025-05-28 19:15:28.218177 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.94s
2025-05-28 19:15:28.218189 | orchestrator | 2025-05-28 19:15:28 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:15:28.218359 | orchestrator | 2025-05-28 19:15:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:15:28.218372 | orchestrator | 2025-05-28 19:15:28 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:15:31.276715 | orchestrator | 2025-05-28 19:15:31 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:15:31.279365 | orchestrator | 2025-05-28 19:15:31 | INFO  | Task
d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED
2025-05-28 19:15:31.281178 | orchestrator | 2025-05-28 19:15:31 | INFO  | Task 887b872f-2745-47bd-8a1c-edbb06ca6741 is in state STARTED
2025-05-28 19:15:31.283077 | orchestrator | 2025-05-28 19:15:31 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED
2025-05-28 19:15:31.284732 | orchestrator | 2025-05-28 19:15:31 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:15:31.286231 | orchestrator | 2025-05-28 19:15:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:15:31.286670 | orchestrator | 2025-05-28 19:15:31 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:15:34.320469 | orchestrator | 2025-05-28 19:15:34 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:15:34.320611 | orchestrator | 2025-05-28 19:15:34 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED
2025-05-28 19:15:34.323355 | orchestrator | 2025-05-28 19:15:34 | INFO  | Task 887b872f-2745-47bd-8a1c-edbb06ca6741 is in state STARTED
2025-05-28 19:15:34.323393 | orchestrator | 2025-05-28 19:15:34 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED
2025-05-28 19:15:34.323405 | orchestrator | 2025-05-28 19:15:34 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:15:34.323416 | orchestrator | 2025-05-28 19:15:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:15:34.323427 | orchestrator | 2025-05-28 19:15:34 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:15:37.359470 | orchestrator | 2025-05-28 19:15:37 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:15:37.359585 | orchestrator | 2025-05-28 19:15:37 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED
2025-05-28 19:15:37.359985 | orchestrator | 2025-05-28 19:15:37 | INFO  | Task 887b872f-2745-47bd-8a1c-edbb06ca6741 is in state STARTED
2025-05-28 19:15:37.360870 | orchestrator | 2025-05-28 19:15:37 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED
2025-05-28 19:15:37.362128 | orchestrator | 2025-05-28 19:15:37 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:15:37.362240 | orchestrator | 2025-05-28 19:15:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:15:37.362257 | orchestrator | 2025-05-28 19:15:37 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:15:40.422176 | orchestrator | 2025-05-28 19:15:40 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:15:40.422389 | orchestrator | 2025-05-28 19:15:40 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED
2025-05-28 19:15:40.423536 | orchestrator | 2025-05-28 19:15:40 | INFO  | Task 887b872f-2745-47bd-8a1c-edbb06ca6741 is in state STARTED
2025-05-28 19:15:40.424976 | orchestrator | 2025-05-28 19:15:40 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED
2025-05-28 19:15:40.426608 | orchestrator | 2025-05-28 19:15:40 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:15:40.434583 | orchestrator | 2025-05-28 19:15:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:15:40.435992 | orchestrator | 2025-05-28 19:15:40 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:15:43.488062 | orchestrator | 2025-05-28 19:15:43 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:15:43.489435 | orchestrator | 2025-05-28 19:15:43 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED
2025-05-28 19:15:43.489753 | orchestrator | 2025-05-28 19:15:43 | INFO  | Task 887b872f-2745-47bd-8a1c-edbb06ca6741 is in state STARTED
2025-05-28 19:15:43.492683 | orchestrator | 2025-05-28 19:15:43 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED
2025-05-28 19:15:43.494554 | orchestrator | 2025-05-28 19:15:43 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:15:43.495807 | orchestrator | 2025-05-28 19:15:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:15:43.495859 | orchestrator | 2025-05-28 19:15:43 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:15:46.544842 | orchestrator | 2025-05-28 19:15:46 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:15:46.546914 | orchestrator | 2025-05-28 19:15:46 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED
2025-05-28 19:15:46.551318 | orchestrator | 2025-05-28 19:15:46 | INFO  | Task 887b872f-2745-47bd-8a1c-edbb06ca6741 is in state STARTED
2025-05-28 19:15:46.553411 | orchestrator | 2025-05-28 19:15:46 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED
2025-05-28 19:15:46.555646 | orchestrator | 2025-05-28 19:15:46 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:15:46.558347 | orchestrator | 2025-05-28 19:15:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:15:46.558856 | orchestrator | 2025-05-28 19:15:46 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:15:49.617563 | orchestrator | 2025-05-28 19:15:49 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:15:49.618850 | orchestrator | 2025-05-28 19:15:49 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED
2025-05-28 19:15:49.622165 | orchestrator | 2025-05-28 19:15:49 | INFO  | Task 8e117471-8708-4a68-984c-98c4c7eb95e5 is in state STARTED
2025-05-28 19:15:49.622943 | orchestrator | 2025-05-28 19:15:49 | INFO  | Task 887b872f-2745-47bd-8a1c-edbb06ca6741 is in state SUCCESS
2025-05-28 19:15:49.623962 | orchestrator | 2025-05-28 19:15:49 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED
2025-05-28 19:15:49.624992 | orchestrator | 2025-05-28 19:15:49 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:15:49.625118 | orchestrator | 2025-05-28 19:15:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:15:49.625441 | orchestrator | 2025-05-28 19:15:49 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:15:52.673819 | orchestrator | 2025-05-28 19:15:52 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:15:52.675860 | orchestrator | 2025-05-28 19:15:52 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED
2025-05-28 19:15:52.677653 | orchestrator | 2025-05-28 19:15:52 | INFO  | Task 8e117471-8708-4a68-984c-98c4c7eb95e5 is in state STARTED
2025-05-28 19:15:52.679498 | orchestrator | 2025-05-28 19:15:52 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED
2025-05-28 19:15:52.681894 | orchestrator | 2025-05-28 19:15:52 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:15:52.682910 | orchestrator | 2025-05-28 19:15:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:15:52.682943 | orchestrator | 2025-05-28 19:15:52 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:15:55.720341 | orchestrator | 2025-05-28 19:15:55 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:15:55.720739 | orchestrator | 2025-05-28 19:15:55 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED
2025-05-28 19:15:55.726913 | orchestrator | 2025-05-28 19:15:55 | INFO  | Task 8e117471-8708-4a68-984c-98c4c7eb95e5 is in state STARTED
2025-05-28 19:15:55.726967 | orchestrator | 2025-05-28 19:15:55 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED
2025-05-28 19:15:55.726976 | orchestrator | 2025-05-28 19:15:55 | INFO  | Task
861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:15:55.726984 | orchestrator | 2025-05-28 19:15:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:15:55.726993 | orchestrator | 2025-05-28 19:15:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:15:58.758792 | orchestrator | 2025-05-28 19:15:58 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:15:58.761015 | orchestrator | 2025-05-28 19:15:58 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED 2025-05-28 19:15:58.762626 | orchestrator | 2025-05-28 19:15:58 | INFO  | Task 8e117471-8708-4a68-984c-98c4c7eb95e5 is in state STARTED 2025-05-28 19:15:58.763545 | orchestrator | 2025-05-28 19:15:58 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED 2025-05-28 19:15:58.766691 | orchestrator | 2025-05-28 19:15:58 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:15:58.766785 | orchestrator | 2025-05-28 19:15:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:15:58.766805 | orchestrator | 2025-05-28 19:15:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:16:01.829143 | orchestrator | 2025-05-28 19:16:01 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:16:01.829692 | orchestrator | 2025-05-28 19:16:01 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED 2025-05-28 19:16:01.831958 | orchestrator | 2025-05-28 19:16:01 | INFO  | Task 8e117471-8708-4a68-984c-98c4c7eb95e5 is in state STARTED 2025-05-28 19:16:01.836732 | orchestrator | 2025-05-28 19:16:01 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state STARTED 2025-05-28 19:16:01.837859 | orchestrator | 2025-05-28 19:16:01 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:16:01.841624 | orchestrator | 2025-05-28 19:16:01 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:16:01.841662 | orchestrator | 2025-05-28 19:16:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:16:04.881780 | orchestrator | 2025-05-28 19:16:04 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:16:04.884212 | orchestrator | 2025-05-28 19:16:04 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED 2025-05-28 19:16:04.887490 | orchestrator | 2025-05-28 19:16:04 | INFO  | Task 8e117471-8708-4a68-984c-98c4c7eb95e5 is in state STARTED 2025-05-28 19:16:04.889023 | orchestrator | 2025-05-28 19:16:04 | INFO  | Task 86450a71-99df-4873-8650-a544799767d6 is in state SUCCESS 2025-05-28 19:16:04.891675 | orchestrator | 2025-05-28 19:16:04.891705 | orchestrator | 2025-05-28 19:16:04.891717 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:16:04.891729 | orchestrator | 2025-05-28 19:16:04.891740 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:16:04.891752 | orchestrator | Wednesday 28 May 2025 19:15:32 +0000 (0:00:00.277) 0:00:00.277 ********* 2025-05-28 19:16:04.891763 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:16:04.891775 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:16:04.891786 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:16:04.891797 | orchestrator | 2025-05-28 19:16:04.891809 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:16:04.891820 | orchestrator | Wednesday 28 May 2025 19:15:32 +0000 (0:00:00.324) 0:00:00.601 ********* 2025-05-28 19:16:04.891832 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-28 19:16:04.891843 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-28 19:16:04.891854 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-28 
19:16:04.891865 | orchestrator | 2025-05-28 19:16:04.891876 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-28 19:16:04.891888 | orchestrator | 2025-05-28 19:16:04.891899 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-28 19:16:04.891910 | orchestrator | Wednesday 28 May 2025 19:15:32 +0000 (0:00:00.286) 0:00:00.888 ********* 2025-05-28 19:16:04.891946 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:16:04.891958 | orchestrator | 2025-05-28 19:16:04.891969 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-28 19:16:04.891980 | orchestrator | Wednesday 28 May 2025 19:15:33 +0000 (0:00:00.721) 0:00:01.610 ********* 2025-05-28 19:16:04.891991 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-28 19:16:04.892002 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-28 19:16:04.892013 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-28 19:16:04.892024 | orchestrator | 2025-05-28 19:16:04.892067 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-28 19:16:04.892080 | orchestrator | Wednesday 28 May 2025 19:15:34 +0000 (0:00:00.797) 0:00:02.407 ********* 2025-05-28 19:16:04.892091 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-28 19:16:04.892102 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-28 19:16:04.892113 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-28 19:16:04.892124 | orchestrator | 2025-05-28 19:16:04.892135 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-28 19:16:04.892146 | orchestrator | Wednesday 28 May 2025 19:15:36 +0000 (0:00:02.169) 0:00:04.577 ********* 2025-05-28 
19:16:04.892156 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:16:04.892168 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:16:04.892191 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:16:04.892212 | orchestrator | 2025-05-28 19:16:04.892224 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-28 19:16:04.892248 | orchestrator | Wednesday 28 May 2025 19:15:38 +0000 (0:00:02.264) 0:00:06.841 ********* 2025-05-28 19:16:04.892259 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:16:04.892271 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:16:04.892283 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:16:04.892295 | orchestrator | 2025-05-28 19:16:04.892308 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:16:04.892322 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:16:04.892336 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:16:04.892348 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:16:04.892361 | orchestrator | 2025-05-28 19:16:04.892373 | orchestrator | 2025-05-28 19:16:04.892386 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:16:04.892398 | orchestrator | Wednesday 28 May 2025 19:15:47 +0000 (0:00:08.172) 0:00:15.013 ********* 2025-05-28 19:16:04.892411 | orchestrator | =============================================================================== 2025-05-28 19:16:04.892423 | orchestrator | memcached : Restart memcached container --------------------------------- 8.17s 2025-05-28 19:16:04.892436 | orchestrator | memcached : Check memcached container ----------------------------------- 2.26s 2025-05-28 19:16:04.892448 | 
orchestrator | memcached : Copying over config.json files for services ----------------- 2.17s 2025-05-28 19:16:04.892461 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.80s 2025-05-28 19:16:04.892473 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.72s 2025-05-28 19:16:04.892486 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-05-28 19:16:04.892498 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s 2025-05-28 19:16:04.892511 | orchestrator | 2025-05-28 19:16:04.892523 | orchestrator | 2025-05-28 19:16:04.892536 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:16:04.892556 | orchestrator | 2025-05-28 19:16:04.892569 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:16:04.892581 | orchestrator | Wednesday 28 May 2025 19:15:32 +0000 (0:00:00.209) 0:00:00.209 ********* 2025-05-28 19:16:04.892633 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:16:04.892646 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:16:04.892659 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:16:04.892673 | orchestrator | 2025-05-28 19:16:04.892684 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:16:04.892707 | orchestrator | Wednesday 28 May 2025 19:15:32 +0000 (0:00:00.269) 0:00:00.478 ********* 2025-05-28 19:16:04.892720 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-28 19:16:04.892732 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-28 19:16:04.892743 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-28 19:16:04.892754 | orchestrator | 2025-05-28 19:16:04.892765 | orchestrator | PLAY [Apply role redis] 
******************************************************** 2025-05-28 19:16:04.892776 | orchestrator | 2025-05-28 19:16:04.892787 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-28 19:16:04.892798 | orchestrator | Wednesday 28 May 2025 19:15:32 +0000 (0:00:00.254) 0:00:00.732 ********* 2025-05-28 19:16:04.892810 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:16:04.892821 | orchestrator | 2025-05-28 19:16:04.892832 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-28 19:16:04.892843 | orchestrator | Wednesday 28 May 2025 19:15:33 +0000 (0:00:00.693) 0:00:01.426 ********* 2025-05-28 19:16:04.892857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.892875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.892892 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.892905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.892924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.892945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.892957 | orchestrator | 2025-05-28 19:16:04.892969 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-28 19:16:04.892980 | orchestrator | Wednesday 28 May 2025 19:15:35 +0000 (0:00:01.637) 0:00:03.063 ********* 2025-05-28 19:16:04.892992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893090 | orchestrator | 2025-05-28 19:16:04.893102 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-28 19:16:04.893113 | orchestrator | Wednesday 28 May 2025 19:15:37 +0000 (0:00:02.691) 0:00:05.755 ********* 2025-05-28 19:16:04.893125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2025-05-28 19:16:04.893184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893215 | orchestrator | 2025-05-28 19:16:04.893226 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-28 19:16:04.893237 | orchestrator | Wednesday 28 May 2025 19:15:40 +0000 (0:00:03.211) 0:00:08.967 ********* 2025-05-28 19:16:04.893249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-28 19:16:04.893335 | orchestrator | 2025-05-28 19:16:04.893346 | orchestrator | TASK 
[redis : Flush handlers] **************************************************
2025-05-28 19:16:04.893358 | orchestrator | Wednesday 28 May 2025 19:15:43 +0000 (0:00:02.595)       0:00:11.563 *********
2025-05-28 19:16:04.893369 | orchestrator |
2025-05-28 19:16:04.893380 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-28 19:16:04.893391 | orchestrator | Wednesday 28 May 2025 19:15:43 +0000 (0:00:00.067)       0:00:11.631 *********
2025-05-28 19:16:04.893402 | orchestrator |
2025-05-28 19:16:04.893413 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-28 19:16:04.893424 | orchestrator | Wednesday 28 May 2025 19:15:43 +0000 (0:00:00.056)       0:00:11.687 *********
2025-05-28 19:16:04.893435 | orchestrator |
2025-05-28 19:16:04.893446 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-05-28 19:16:04.893457 | orchestrator | Wednesday 28 May 2025 19:15:43 +0000 (0:00:00.061)       0:00:11.749 *********
2025-05-28 19:16:04.893469 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:16:04.893480 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:16:04.893491 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:16:04.893502 | orchestrator |
2025-05-28 19:16:04.893513 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-28 19:16:04.893524 | orchestrator | Wednesday 28 May 2025 19:15:53 +0000 (0:00:10.062)       0:00:21.811 *********
2025-05-28 19:16:04.893535 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:16:04.893546 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:16:04.893558 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:16:04.893569 | orchestrator |
2025-05-28 19:16:04.893580 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:16:04.893607 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:16:04.893625 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:16:04.893636 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:16:04.893647 | orchestrator |
2025-05-28 19:16:04.893658 | orchestrator |
2025-05-28 19:16:04.893669 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:16:04.893680 | orchestrator | Wednesday 28 May 2025 19:16:03 +0000 (0:00:10.033)       0:00:31.845 *********
2025-05-28 19:16:04.893691 | orchestrator | ===============================================================================
2025-05-28 19:16:04.893702 | orchestrator | redis : Restart redis container ---------------------------------------- 10.06s
2025-05-28 19:16:04.893713 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.03s
2025-05-28 19:16:04.893728 | orchestrator | redis : Copying over redis config files --------------------------------- 3.21s
2025-05-28 19:16:04.893739 | orchestrator | redis : Copying over default config.json files -------------------------- 2.69s
2025-05-28 19:16:04.893750 | orchestrator | redis : Check redis containers ------------------------------------------ 2.60s
2025-05-28 19:16:04.893761 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.64s
2025-05-28 19:16:04.893772 | orchestrator | redis : include_tasks --------------------------------------------------- 0.69s
2025-05-28 19:16:04.893783 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2025-05-28 19:16:04.893794 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.25s
2025-05-28 19:16:04.893805 | orchestrator | redis : Flush handlers
-------------------------------------------------- 0.19s
2025-05-28 19:16:04.893895 | orchestrator | 2025-05-28 19:16:04 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:16:04.894374 | orchestrator | 2025-05-28 19:16:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:16:04.894677 | orchestrator | 2025-05-28 19:16:04 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:16:07.926414 | orchestrator | 2025-05-28 19:16:07 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:16:07.927553 | orchestrator | 2025-05-28 19:16:07 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state STARTED
2025-05-28 19:16:07.929058 | orchestrator | 2025-05-28 19:16:07 | INFO  | Task 8e117471-8708-4a68-984c-98c4c7eb95e5 is in state STARTED
2025-05-28 19:16:07.930718 | orchestrator | 2025-05-28 19:16:07 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:16:07.932028 | orchestrator | 2025-05-28 19:16:07 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:16:07.932051 | orchestrator | 2025-05-28 19:16:07 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:16:53.675678 | orchestrator | 2025-05-28 19:16:53 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:16:53.678778 | orchestrator | 2025-05-28 19:16:53 | INFO  | Task d24d5606-b455-413d-8909-d466d5eef3f4 is in state SUCCESS
2025-05-28 19:16:53.678817 | orchestrator | 2025-05-28 19:16:53 | INFO  | Task 8e117471-8708-4a68-984c-98c4c7eb95e5 is in state STARTED
2025-05-28 19:16:53.678829 | orchestrator | 2025-05-28 19:16:53 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:16:53.678841 | orchestrator | 2025-05-28 19:16:53 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:16:53.680786 | orchestrator |
2025-05-28 19:16:53.680826 | orchestrator |
2025-05-28 19:16:53.680840 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 19:16:53.680851 | orchestrator |
2025-05-28 19:16:53.680863 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 19:16:53.680874 | orchestrator | Wednesday 28 May 2025 19:15:32 +0000 (0:00:00.303)       0:00:00.303 *********
2025-05-28 19:16:53.680912 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:16:53.680931 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:16:53.680952 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:16:53.680972 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:16:53.680991 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:16:53.681010 | orchestrator | ok:
[testbed-node-2] 2025-05-28 19:16:53.681022 | orchestrator | 2025-05-28 19:16:53.681033 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:16:53.681044 | orchestrator | Wednesday 28 May 2025 19:15:33 +0000 (0:00:00.636) 0:00:00.939 ********* 2025-05-28 19:16:53.681055 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-28 19:16:53.681066 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-28 19:16:53.681077 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-28 19:16:53.681088 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-28 19:16:53.681098 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-28 19:16:53.681109 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-28 19:16:53.681120 | orchestrator | 2025-05-28 19:16:53.681131 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-28 19:16:53.681142 | orchestrator | 2025-05-28 19:16:53.681153 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-28 19:16:53.681163 | orchestrator | Wednesday 28 May 2025 19:15:34 +0000 (0:00:00.863) 0:00:01.803 ********* 2025-05-28 19:16:53.681175 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:16:53.681187 | orchestrator | 2025-05-28 19:16:53.681197 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-28 19:16:53.681209 | orchestrator | Wednesday 28 May 2025 19:15:35 +0000 (0:00:01.584) 0:00:03.388 ********* 2025-05-28 19:16:53.681219 | 
orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-28 19:16:53.681231 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-28 19:16:53.681242 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-28 19:16:53.681253 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-28 19:16:53.681263 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-28 19:16:53.681274 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-28 19:16:53.681285 | orchestrator | 2025-05-28 19:16:53.681297 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-28 19:16:53.681308 | orchestrator | Wednesday 28 May 2025 19:15:37 +0000 (0:00:01.272) 0:00:04.661 ********* 2025-05-28 19:16:53.681319 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-28 19:16:53.681330 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-28 19:16:53.681343 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-28 19:16:53.681362 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-28 19:16:53.681381 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-28 19:16:53.681399 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-28 19:16:53.681417 | orchestrator | 2025-05-28 19:16:53.681437 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-28 19:16:53.681458 | orchestrator | Wednesday 28 May 2025 19:15:39 +0000 (0:00:02.072) 0:00:06.734 ********* 2025-05-28 19:16:53.681477 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-28 19:16:53.681490 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:16:53.681502 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-28 19:16:53.681514 | orchestrator | skipping: [testbed-node-4] 2025-05-28 
19:16:53.681564 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-28 19:16:53.681579 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:16:53.681599 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-28 19:16:53.681618 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:16:53.681637 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-28 19:16:53.681658 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:16:53.681674 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-28 19:16:53.681687 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:16:53.681699 | orchestrator | 2025-05-28 19:16:53.681712 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-28 19:16:53.681724 | orchestrator | Wednesday 28 May 2025 19:15:41 +0000 (0:00:01.956) 0:00:08.691 ********* 2025-05-28 19:16:53.681736 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:16:53.681747 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:16:53.681758 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:16:53.681769 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:16:53.681779 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:16:53.681790 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:16:53.681802 | orchestrator | 2025-05-28 19:16:53.681813 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-28 19:16:53.681824 | orchestrator | Wednesday 28 May 2025 19:15:42 +0000 (0:00:01.019) 0:00:09.710 ********* 2025-05-28 19:16:53.681860 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-28 19:16:53.681878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-28 19:16:53.681890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-28 19:16:53.681902 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-28 19:16:53.681920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-28 19:16:53.681938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-28 19:16:53.681955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-28 19:16:53.681967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-28 19:16:53.681978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-28 19:16:53.681996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-28 19:16:53.682007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-28 
19:16:53.682119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682134 | orchestrator |
2025-05-28 19:16:53.682151 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-05-28 19:16:53.682163 | orchestrator | Wednesday 28 May 2025 19:15:44 +0000 (0:00:02.608) 0:00:12.318 *********
2025-05-28 19:16:53.682174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682186 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682197 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682274 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682347 | orchestrator |
2025-05-28 19:16:53.682368 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ******
2025-05-28 19:16:53.682387 | orchestrator | Wednesday 28 May 2025 19:15:48 +0000 (0:00:03.641) 0:00:15.960 *********
2025-05-28 19:16:53.682407 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:16:53.682430 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:16:53.682449 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:16:53.682461 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:16:53.682472 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:16:53.682483 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:16:53.682494 | orchestrator |
2025-05-28 19:16:53.682505 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] ***
2025-05-28 19:16:53.682523 | orchestrator | Wednesday 28 May 2025 19:15:51 +0000 (0:00:03.030) 0:00:18.991 *********
2025-05-28 19:16:53.682559 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:16:53.682580 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:16:53.682599 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:16:53.682611 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:16:53.682622 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:16:53.682642 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:16:53.682660 | orchestrator |
2025-05-28 19:16:53.682680 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-05-28 19:16:53.682698 | orchestrator | Wednesday 28 May 2025 19:15:55 +0000 (0:00:03.963) 0:00:22.954 *********
2025-05-28 19:16:53.682709 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:16:53.682720 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:16:53.682731 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:16:53.682742 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:16:53.682753 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:16:53.682764 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:16:53.682774 | orchestrator |
2025-05-28 19:16:53.682785 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-05-28 19:16:53.682796 | orchestrator | Wednesday 28 May 2025 19:15:56 +0000 (0:00:01.317) 0:00:24.272 *********
2025-05-28 19:16:53.682815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682847 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-28 19:16:53.682922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682946 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-28 19:16:53.682999 | orchestrator |
2025-05-28 19:16:53.683010 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-28 19:16:53.683021 | orchestrator | Wednesday 28 May 2025 19:15:59 +0000 (0:00:02.738) 0:00:27.010 *********
2025-05-28 19:16:53.683032 | orchestrator |
2025-05-28 19:16:53.683043 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-28 19:16:53.683054 | orchestrator | Wednesday 28 May 2025 19:15:59 +0000 (0:00:00.234) 0:00:27.245 ********* 2025-05-28 19:16:53.683065 | orchestrator | 2025-05-28 19:16:53.683076 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-28 19:16:53.683086 | orchestrator | Wednesday 28 May 2025 19:16:00 +0000 (0:00:00.432) 0:00:27.677 ********* 2025-05-28 19:16:53.683097 | orchestrator | 2025-05-28 19:16:53.683108 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-28 19:16:53.683118 | orchestrator | Wednesday 28 May 2025 19:16:00 +0000 (0:00:00.357) 0:00:28.035 ********* 2025-05-28 19:16:53.683129 | orchestrator | 2025-05-28 19:16:53.683140 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-28 19:16:53.683151 | orchestrator | Wednesday 28 May 2025 19:16:01 +0000 (0:00:00.694) 0:00:28.730 ********* 2025-05-28 19:16:53.683162 | orchestrator | 2025-05-28 19:16:53.683173 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-28 19:16:53.683183 | orchestrator | Wednesday 28 May 2025 19:16:01 +0000 (0:00:00.227) 0:00:28.957 ********* 2025-05-28 19:16:53.683194 | orchestrator | 2025-05-28 19:16:53.683205 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-28 19:16:53.683216 | orchestrator | Wednesday 28 May 2025 19:16:01 +0000 (0:00:00.329) 0:00:29.287 ********* 2025-05-28 19:16:53.683226 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:16:53.683237 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:16:53.683255 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:16:53.683267 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:16:53.683278 | orchestrator | 
changed: [testbed-node-4] 2025-05-28 19:16:53.683289 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:16:53.683300 | orchestrator | 2025-05-28 19:16:53.683310 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-28 19:16:53.683322 | orchestrator | Wednesday 28 May 2025 19:16:12 +0000 (0:00:10.667) 0:00:39.954 ********* 2025-05-28 19:16:53.683340 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:16:53.683360 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:16:53.683378 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:16:53.683396 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:16:53.683414 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:16:53.683435 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:16:53.683453 | orchestrator | 2025-05-28 19:16:53.683472 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-28 19:16:53.683488 | orchestrator | Wednesday 28 May 2025 19:16:14 +0000 (0:00:02.295) 0:00:42.250 ********* 2025-05-28 19:16:53.683499 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:16:53.683510 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:16:53.683521 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:16:53.683554 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:16:53.683567 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:16:53.683578 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:16:53.683588 | orchestrator | 2025-05-28 19:16:53.683599 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-28 19:16:53.683610 | orchestrator | Wednesday 28 May 2025 19:16:25 +0000 (0:00:10.798) 0:00:53.048 ********* 2025-05-28 19:16:53.683621 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-28 19:16:53.683632 | orchestrator | changed: [testbed-node-4] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-28 19:16:53.683643 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-28 19:16:53.683654 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-28 19:16:53.683673 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-28 19:16:53.683692 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-28 19:16:53.683713 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-28 19:16:53.683733 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-28 19:16:53.683751 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-28 19:16:53.683768 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-28 19:16:53.683779 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-28 19:16:53.683790 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-28 19:16:53.683801 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-28 19:16:53.683811 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-28 19:16:53.683822 | orchestrator | ok: [testbed-node-0] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-28 19:16:53.683833 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-28 19:16:53.683851 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-28 19:16:53.683862 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-28 19:16:53.683873 | orchestrator | 2025-05-28 19:16:53.683884 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-28 19:16:53.683894 | orchestrator | Wednesday 28 May 2025 19:16:34 +0000 (0:00:08.677) 0:01:01.726 ********* 2025-05-28 19:16:53.683905 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-28 19:16:53.683916 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:16:53.683927 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-28 19:16:53.683938 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:16:53.683949 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-28 19:16:53.683960 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:16:53.683970 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-28 19:16:53.683981 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-28 19:16:53.683992 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-28 19:16:53.684003 | orchestrator | 2025-05-28 19:16:53.684014 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-28 19:16:53.684025 | orchestrator | Wednesday 28 May 2025 19:16:37 +0000 (0:00:03.116) 0:01:04.843 ********* 2025-05-28 19:16:53.684035 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-28 19:16:53.684046 | orchestrator | 
skipping: [testbed-node-3] 2025-05-28 19:16:53.684057 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-28 19:16:53.684068 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:16:53.684079 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-28 19:16:53.684090 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:16:53.684101 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-28 19:16:53.684120 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-28 19:16:53.684131 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-28 19:16:53.684142 | orchestrator | 2025-05-28 19:16:53.684153 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-28 19:16:53.684163 | orchestrator | Wednesday 28 May 2025 19:16:41 +0000 (0:00:04.530) 0:01:09.373 ********* 2025-05-28 19:16:53.684174 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:16:53.684190 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:16:53.684201 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:16:53.684211 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:16:53.684222 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:16:53.684233 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:16:53.684244 | orchestrator | 2025-05-28 19:16:53.684254 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:16:53.684266 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:16:53.684277 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:16:53.684288 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:16:53.684300 | orchestrator | 
testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 19:16:53.684310 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 19:16:53.684321 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 19:16:53.684343 | orchestrator | 2025-05-28 19:16:53.684362 | orchestrator | 2025-05-28 19:16:53.684380 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:16:53.684399 | orchestrator | Wednesday 28 May 2025 19:16:50 +0000 (0:00:08.633) 0:01:18.008 ********* 2025-05-28 19:16:53.684417 | orchestrator | =============================================================================== 2025-05-28 19:16:53.684429 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.43s 2025-05-28 19:16:53.684440 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.67s 2025-05-28 19:16:53.684451 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.68s 2025-05-28 19:16:53.684461 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.53s 2025-05-28 19:16:53.684472 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 3.96s 2025-05-28 19:16:53.684483 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.64s 2025-05-28 19:16:53.684493 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.12s 2025-05-28 19:16:53.684504 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.03s 2025-05-28 19:16:53.684515 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.74s 2025-05-28 19:16:53.684525 | orchestrator | openvswitch : 
Ensuring config directories exist ------------------------- 2.61s 2025-05-28 19:16:53.684596 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.30s 2025-05-28 19:16:53.684609 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.28s 2025-05-28 19:16:53.684619 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.07s 2025-05-28 19:16:53.684630 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.96s 2025-05-28 19:16:53.684641 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.58s 2025-05-28 19:16:53.684652 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.32s 2025-05-28 19:16:53.684662 | orchestrator | module-load : Load modules ---------------------------------------------- 1.27s 2025-05-28 19:16:53.684673 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.02s 2025-05-28 19:16:53.684684 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s 2025-05-28 19:16:53.684700 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.64s 2025-05-28 19:16:53.684784 | orchestrator | 2025-05-28 19:16:53 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:16:53.684807 | orchestrator | 2025-05-28 19:16:53 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:16:56.714635 | orchestrator | 2025-05-28 19:16:56 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:16:56.715002 | orchestrator | 2025-05-28 19:16:56 | INFO  | Task 8e117471-8708-4a68-984c-98c4c7eb95e5 is in state STARTED 2025-05-28 19:16:56.715899 | orchestrator | 2025-05-28 19:16:56 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:16:56.716652 | 
orchestrator | 2025-05-28 19:16:56 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:16:56.719402 | orchestrator | 2025-05-28 19:16:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:16:56.719429 | orchestrator | 2025-05-28 19:16:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:13.033615 | orchestrator | 2025-05-28 19:18:13 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:13.033934 | orchestrator | 2025-05-28 19:18:13 | INFO  | Task 8e117471-8708-4a68-984c-98c4c7eb95e5 is in state SUCCESS 2025-05-28 19:18:13.035965 | orchestrator | 2025-05-28 19:18:13.036011 | orchestrator | 2025-05-28 19:18:13.036024 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-28 19:18:13.036036 | orchestrator | 2025-05-28 19:18:13.036048 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-28 19:18:13.036059 | orchestrator | Wednesday 28 May 2025 19:15:54 +0000 (0:00:00.241) 0:00:00.241 ********* 2025-05-28 19:18:13.036071 | orchestrator | ok: [localhost] => { 2025-05-28 19:18:13.036084 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-05-28 19:18:13.036096 | orchestrator | } 2025-05-28 19:18:13.036107 | orchestrator | 2025-05-28 19:18:13.036118 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-05-28 19:18:13.036129 | orchestrator | Wednesday 28 May 2025 19:15:54 +0000 (0:00:00.056) 0:00:00.297 ********* 2025-05-28 19:18:13.036142 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-05-28 19:18:13.036154 | orchestrator | ...ignoring 2025-05-28 19:18:13.036165 | orchestrator | 2025-05-28 19:18:13.036176 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-05-28 19:18:13.036187 | orchestrator | Wednesday 28 May 2025 19:15:57 +0000 (0:00:03.114) 0:00:03.412 ********* 2025-05-28 19:18:13.036248 | orchestrator | skipping: [localhost] 2025-05-28 19:18:13.036260 | orchestrator | 2025-05-28 19:18:13.036271 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-05-28 19:18:13.036282 | orchestrator | Wednesday 28 May 2025 19:15:58 +0000 (0:00:00.084) 0:00:03.496 ********* 2025-05-28 19:18:13.036293 | orchestrator | ok: [localhost] 2025-05-28 19:18:13.036304 | orchestrator | 2025-05-28 19:18:13.036315 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:18:13.036326 | 
orchestrator | 2025-05-28 19:18:13.036337 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:18:13.036347 | orchestrator | Wednesday 28 May 2025 19:15:58 +0000 (0:00:00.152) 0:00:03.649 ********* 2025-05-28 19:18:13.036358 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:18:13.036369 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:18:13.036380 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:18:13.036392 | orchestrator | 2025-05-28 19:18:13.036403 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:18:13.036414 | orchestrator | Wednesday 28 May 2025 19:15:58 +0000 (0:00:00.787) 0:00:04.436 ********* 2025-05-28 19:18:13.036425 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-05-28 19:18:13.036437 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-05-28 19:18:13.036448 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-05-28 19:18:13.036509 | orchestrator | 2025-05-28 19:18:13.036533 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-05-28 19:18:13.036551 | orchestrator | 2025-05-28 19:18:13.036570 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-28 19:18:13.036590 | orchestrator | Wednesday 28 May 2025 19:15:59 +0000 (0:00:00.936) 0:00:05.373 ********* 2025-05-28 19:18:13.036609 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:18:13.036627 | orchestrator | 2025-05-28 19:18:13.036646 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-28 19:18:13.036667 | orchestrator | Wednesday 28 May 2025 19:16:01 +0000 (0:00:01.541) 0:00:06.916 ********* 2025-05-28 19:18:13.036718 | orchestrator | ok: [testbed-node-0] 2025-05-28 
19:18:13.036731 | orchestrator | 2025-05-28 19:18:13.036742 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-05-28 19:18:13.036753 | orchestrator | Wednesday 28 May 2025 19:16:03 +0000 (0:00:02.370) 0:00:09.287 ********* 2025-05-28 19:18:13.036764 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:18:13.036776 | orchestrator | 2025-05-28 19:18:13.036787 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-05-28 19:18:13.036798 | orchestrator | Wednesday 28 May 2025 19:16:04 +0000 (0:00:00.844) 0:00:10.131 ********* 2025-05-28 19:18:13.036809 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:18:13.036819 | orchestrator | 2025-05-28 19:18:13.036830 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-28 19:18:13.036841 | orchestrator | Wednesday 28 May 2025 19:16:05 +0000 (0:00:00.771) 0:00:10.903 ********* 2025-05-28 19:18:13.036852 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:18:13.036862 | orchestrator | 2025-05-28 19:18:13.036873 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-28 19:18:13.036884 | orchestrator | Wednesday 28 May 2025 19:16:05 +0000 (0:00:00.335) 0:00:11.238 ********* 2025-05-28 19:18:13.036895 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:18:13.036907 | orchestrator | 2025-05-28 19:18:13.036918 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-28 19:18:13.036929 | orchestrator | Wednesday 28 May 2025 19:16:06 +0000 (0:00:00.274) 0:00:11.513 ********* 2025-05-28 19:18:13.036940 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:18:13.036951 | orchestrator | 2025-05-28 19:18:13.036962 | orchestrator | TASK [rabbitmq : Get container facts] 
****************************************** 2025-05-28 19:18:13.037024 | orchestrator | Wednesday 28 May 2025 19:16:06 +0000 (0:00:00.739) 0:00:12.252 ********* 2025-05-28 19:18:13.037038 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:18:13.037049 | orchestrator | 2025-05-28 19:18:13.037060 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-28 19:18:13.037071 | orchestrator | Wednesday 28 May 2025 19:16:07 +0000 (0:00:00.790) 0:00:13.043 ********* 2025-05-28 19:18:13.037082 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:18:13.037093 | orchestrator | 2025-05-28 19:18:13.037104 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-28 19:18:13.037115 | orchestrator | Wednesday 28 May 2025 19:16:07 +0000 (0:00:00.306) 0:00:13.349 ********* 2025-05-28 19:18:13.037126 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:18:13.037137 | orchestrator | 2025-05-28 19:18:13.037161 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-28 19:18:13.037172 | orchestrator | Wednesday 28 May 2025 19:16:08 +0000 (0:00:00.307) 0:00:13.656 ********* 2025-05-28 19:18:13.037190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:18:13.037218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:18:13.037232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:18:13.037244 | orchestrator | 2025-05-28 19:18:13.037256 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-28 19:18:13.037267 | orchestrator | Wednesday 28 May 2025 19:16:09 +0000 (0:00:01.034) 0:00:14.691 ********* 2025-05-28 19:18:13.037296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:18:13.037310 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:18:13.037331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:18:13.037343 | orchestrator | 2025-05-28 19:18:13.037355 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-28 19:18:13.037366 | orchestrator | Wednesday 28 May 2025 19:16:11 +0000 (0:00:01.867) 0:00:16.558 ********* 2025-05-28 19:18:13.037377 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-28 19:18:13.037388 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-28 19:18:13.037400 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-28 19:18:13.037410 | orchestrator | 2025-05-28 19:18:13.037422 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-28 19:18:13.037433 | orchestrator | Wednesday 28 May 2025 19:16:12 +0000 (0:00:01.730) 0:00:18.289 ********* 2025-05-28 19:18:13.037443 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-28 19:18:13.037509 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-28 19:18:13.037523 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-28 19:18:13.037533 | orchestrator | 2025-05-28 19:18:13.037544 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-28 19:18:13.037555 | orchestrator | Wednesday 28 May 2025 19:16:15 +0000 (0:00:02.943) 0:00:21.232 ********* 2025-05-28 19:18:13.037566 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-28 19:18:13.037577 | orchestrator 
| changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-28 19:18:13.037588 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-28 19:18:13.037599 | orchestrator | 2025-05-28 19:18:13.037617 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-28 19:18:13.037628 | orchestrator | Wednesday 28 May 2025 19:16:19 +0000 (0:00:03.238) 0:00:24.470 ********* 2025-05-28 19:18:13.037638 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-28 19:18:13.037648 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-28 19:18:13.037658 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-28 19:18:13.037680 | orchestrator | 2025-05-28 19:18:13.037695 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-28 19:18:13.037705 | orchestrator | Wednesday 28 May 2025 19:16:21 +0000 (0:00:01.993) 0:00:26.464 ********* 2025-05-28 19:18:13.037715 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-28 19:18:13.039929 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-28 19:18:13.039976 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-28 19:18:13.039987 | orchestrator | 2025-05-28 19:18:13.039998 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-28 19:18:13.040009 | orchestrator | Wednesday 28 May 2025 19:16:22 +0000 (0:00:01.836) 0:00:28.300 ********* 2025-05-28 19:18:13.040019 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-28 19:18:13.040028 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-28 19:18:13.040038 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-28 19:18:13.040048 | orchestrator | 2025-05-28 19:18:13.040058 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-28 19:18:13.040068 | orchestrator | Wednesday 28 May 2025 19:16:24 +0000 (0:00:02.017) 0:00:30.318 ********* 2025-05-28 19:18:13.040077 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:18:13.040088 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:18:13.040098 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:18:13.040108 | orchestrator | 2025-05-28 19:18:13.040118 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-28 19:18:13.040127 | orchestrator | Wednesday 28 May 2025 19:16:25 +0000 (0:00:00.966) 0:00:31.284 ********* 2025-05-28 19:18:13.040140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:18:13.040152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:18:13.040196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:18:13.040208 | orchestrator | 2025-05-28 19:18:13.040218 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-28 19:18:13.040229 | orchestrator | Wednesday 28 May 2025 19:16:28 +0000 (0:00:02.716) 0:00:34.000 ********* 2025-05-28 19:18:13.040238 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:18:13.040248 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:18:13.040258 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:18:13.040268 | orchestrator | 2025-05-28 19:18:13.040278 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-28 19:18:13.040288 | orchestrator | Wednesday 28 May 2025 19:16:30 +0000 (0:00:01.505) 0:00:35.506 ********* 2025-05-28 19:18:13.040297 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:18:13.040307 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:18:13.040317 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:18:13.040327 | orchestrator | 2025-05-28 19:18:13.040336 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-28 19:18:13.040346 | orchestrator | Wednesday 28 May 2025 19:16:36 +0000 (0:00:06.146) 0:00:41.652 ********* 2025-05-28 19:18:13.040356 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:18:13.040366 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:18:13.040375 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:18:13.040385 | orchestrator | 
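The `healthcheck` block recorded in the container definition above (`interval`, `retries`, `start_period`, `test`, `timeout`) maps onto Docker's standard healthcheck options. As an illustration only, a hypothetical docker-compose rendering of those same parameters might look like this (image and command are copied from the log; the compose layout itself is an assumption, not how Kolla actually launches the container):

```yaml
# Hypothetical compose fragment; values taken from the logged
# 'Check rabbitmq containers' item, layout is illustrative only.
services:
  rabbitmq:
    image: registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_rabbitmq"]
      interval: 30s
      retries: 3
      start_period: 5s
      timeout: 30s
```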
2025-05-28 19:18:13.040395 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-28 19:18:13.040405 | orchestrator | 2025-05-28 19:18:13.040414 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-28 19:18:13.040424 | orchestrator | Wednesday 28 May 2025 19:16:36 +0000 (0:00:00.312) 0:00:41.965 ********* 2025-05-28 19:18:13.040434 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:18:13.040445 | orchestrator | 2025-05-28 19:18:13.040454 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-28 19:18:13.040525 | orchestrator | Wednesday 28 May 2025 19:16:37 +0000 (0:00:00.591) 0:00:42.556 ********* 2025-05-28 19:18:13.040535 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:18:13.040545 | orchestrator | 2025-05-28 19:18:13.040555 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-28 19:18:13.040565 | orchestrator | Wednesday 28 May 2025 19:16:37 +0000 (0:00:00.582) 0:00:43.139 ********* 2025-05-28 19:18:13.040574 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:18:13.040583 | orchestrator | 2025-05-28 19:18:13.040591 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-28 19:18:13.040599 | orchestrator | Wednesday 28 May 2025 19:16:39 +0000 (0:00:01.871) 0:00:45.011 ********* 2025-05-28 19:18:13.040607 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:18:13.040615 | orchestrator | 2025-05-28 19:18:13.040623 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-28 19:18:13.040639 | orchestrator | 2025-05-28 19:18:13.040647 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-28 19:18:13.040655 | orchestrator | Wednesday 28 May 2025 19:17:32 +0000 (0:00:53.406) 0:01:38.417 ********* 
2025-05-28 19:18:13.040663 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:18:13.040671 | orchestrator | 2025-05-28 19:18:13.040679 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-28 19:18:13.040687 | orchestrator | Wednesday 28 May 2025 19:17:33 +0000 (0:00:00.881) 0:01:39.299 ********* 2025-05-28 19:18:13.040695 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:18:13.040703 | orchestrator | 2025-05-28 19:18:13.040711 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-28 19:18:13.040719 | orchestrator | Wednesday 28 May 2025 19:17:34 +0000 (0:00:00.364) 0:01:39.664 ********* 2025-05-28 19:18:13.040727 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:18:13.040735 | orchestrator | 2025-05-28 19:18:13.040743 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-28 19:18:13.040751 | orchestrator | Wednesday 28 May 2025 19:17:36 +0000 (0:00:02.118) 0:01:41.783 ********* 2025-05-28 19:18:13.040759 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:18:13.040767 | orchestrator | 2025-05-28 19:18:13.040775 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-28 19:18:13.040783 | orchestrator | 2025-05-28 19:18:13.040791 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-28 19:18:13.040799 | orchestrator | Wednesday 28 May 2025 19:17:51 +0000 (0:00:14.844) 0:01:56.627 ********* 2025-05-28 19:18:13.040807 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:18:13.040815 | orchestrator | 2025-05-28 19:18:13.040823 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-28 19:18:13.040832 | orchestrator | Wednesday 28 May 2025 19:17:51 +0000 (0:00:00.662) 0:01:57.290 ********* 2025-05-28 19:18:13.040840 | orchestrator | skipping: 
[testbed-node-2] 2025-05-28 19:18:13.040848 | orchestrator | 2025-05-28 19:18:13.040856 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-28 19:18:13.040871 | orchestrator | Wednesday 28 May 2025 19:17:52 +0000 (0:00:00.248) 0:01:57.539 ********* 2025-05-28 19:18:13.040879 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:18:13.040887 | orchestrator | 2025-05-28 19:18:13.040895 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-28 19:18:13.040903 | orchestrator | Wednesday 28 May 2025 19:17:53 +0000 (0:00:01.735) 0:01:59.275 ********* 2025-05-28 19:18:13.040911 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:18:13.040920 | orchestrator | 2025-05-28 19:18:13.040928 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-05-28 19:18:13.040936 | orchestrator | 2025-05-28 19:18:13.040944 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-28 19:18:13.040952 | orchestrator | Wednesday 28 May 2025 19:18:08 +0000 (0:00:14.506) 0:02:13.781 ********* 2025-05-28 19:18:13.040960 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:18:13.040968 | orchestrator | 2025-05-28 19:18:13.040976 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-05-28 19:18:13.040989 | orchestrator | Wednesday 28 May 2025 19:18:09 +0000 (0:00:01.228) 0:02:15.010 ********* 2025-05-28 19:18:13.041006 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-28 19:18:13.041015 | orchestrator | enable_outward_rabbitmq_True 2025-05-28 19:18:13.041023 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-28 19:18:13.041031 | orchestrator | outward_rabbitmq_restart 2025-05-28 19:18:13.041039 | orchestrator | ok: [testbed-node-0] 
2025-05-28 19:18:13.041047 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:18:13.041055 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:18:13.041063 | orchestrator | 2025-05-28 19:18:13.041071 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-28 19:18:13.041085 | orchestrator | skipping: no hosts matched 2025-05-28 19:18:13.041093 | orchestrator | 2025-05-28 19:18:13.041101 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-28 19:18:13.041109 | orchestrator | skipping: no hosts matched 2025-05-28 19:18:13.041117 | orchestrator | 2025-05-28 19:18:13.041125 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-28 19:18:13.041133 | orchestrator | skipping: no hosts matched 2025-05-28 19:18:13.041141 | orchestrator | 2025-05-28 19:18:13.041149 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:18:13.041157 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-28 19:18:13.041166 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-28 19:18:13.041174 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 19:18:13.041182 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-28 19:18:13.041190 | orchestrator | 2025-05-28 19:18:13.041199 | orchestrator | 2025-05-28 19:18:13.041207 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:18:13.041215 | orchestrator | Wednesday 28 May 2025 19:18:12 +0000 (0:00:03.114) 0:02:18.125 ********* 2025-05-28 19:18:13.041222 | orchestrator | 
=============================================================================== 2025-05-28 19:18:13.041231 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 82.76s 2025-05-28 19:18:13.041238 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.15s 2025-05-28 19:18:13.041247 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.73s 2025-05-28 19:18:13.041254 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.24s 2025-05-28 19:18:13.041262 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.11s 2025-05-28 19:18:13.041270 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.11s 2025-05-28 19:18:13.041278 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.94s 2025-05-28 19:18:13.041286 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.72s 2025-05-28 19:18:13.041294 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.37s 2025-05-28 19:18:13.041302 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.14s 2025-05-28 19:18:13.041310 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.02s 2025-05-28 19:18:13.041318 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.99s 2025-05-28 19:18:13.041326 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.87s 2025-05-28 19:18:13.041333 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.84s 2025-05-28 19:18:13.041342 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.73s 2025-05-28 19:18:13.041349 | orchestrator | rabbitmq : 
include_tasks ------------------------------------------------ 1.54s 2025-05-28 19:18:13.041357 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.51s 2025-05-28 19:18:13.041365 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.23s 2025-05-28 19:18:13.041373 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.20s 2025-05-28 19:18:13.041381 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.03s 2025-05-28 19:18:13.041393 | orchestrator | 2025-05-28 19:18:13 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:13.041408 | orchestrator | 2025-05-28 19:18:13 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:13.041511 | orchestrator | 2025-05-28 19:18:13 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:13.041523 | orchestrator | 2025-05-28 19:18:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:16.092861 | orchestrator | 2025-05-28 19:18:16 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:16.097321 | orchestrator | 2025-05-28 19:18:16 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:16.098248 | orchestrator | 2025-05-28 19:18:16 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:16.104355 | orchestrator | 2025-05-28 19:18:16 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:16.104423 | orchestrator | 2025-05-28 19:18:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:19.151190 | orchestrator | 2025-05-28 19:18:19 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:19.151297 | orchestrator | 2025-05-28 19:18:19 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in 
state STARTED 2025-05-28 19:18:19.151315 | orchestrator | 2025-05-28 19:18:19 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:19.152933 | orchestrator | 2025-05-28 19:18:19 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:19.152959 | orchestrator | 2025-05-28 19:18:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:22.197350 | orchestrator | 2025-05-28 19:18:22 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:22.199280 | orchestrator | 2025-05-28 19:18:22 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:22.201133 | orchestrator | 2025-05-28 19:18:22 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:22.202621 | orchestrator | 2025-05-28 19:18:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:22.202649 | orchestrator | 2025-05-28 19:18:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:25.252867 | orchestrator | 2025-05-28 19:18:25 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:25.252963 | orchestrator | 2025-05-28 19:18:25 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:25.255553 | orchestrator | 2025-05-28 19:18:25 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:25.257319 | orchestrator | 2025-05-28 19:18:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:25.257583 | orchestrator | 2025-05-28 19:18:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:28.306951 | orchestrator | 2025-05-28 19:18:28 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:28.310958 | orchestrator | 2025-05-28 19:18:28 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 
19:18:28.310993 | orchestrator | 2025-05-28 19:18:28 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:28.311005 | orchestrator | 2025-05-28 19:18:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:28.311018 | orchestrator | 2025-05-28 19:18:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:31.368241 | orchestrator | 2025-05-28 19:18:31 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:31.370614 | orchestrator | 2025-05-28 19:18:31 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:31.371759 | orchestrator | 2025-05-28 19:18:31 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:31.376347 | orchestrator | 2025-05-28 19:18:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:31.376387 | orchestrator | 2025-05-28 19:18:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:34.438095 | orchestrator | 2025-05-28 19:18:34 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:34.439363 | orchestrator | 2025-05-28 19:18:34 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:34.442102 | orchestrator | 2025-05-28 19:18:34 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:34.443962 | orchestrator | 2025-05-28 19:18:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:34.446426 | orchestrator | 2025-05-28 19:18:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:37.498255 | orchestrator | 2025-05-28 19:18:37 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:37.498905 | orchestrator | 2025-05-28 19:18:37 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:37.501712 | orchestrator 
| 2025-05-28 19:18:37 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:37.504024 | orchestrator | 2025-05-28 19:18:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:37.504051 | orchestrator | 2025-05-28 19:18:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:40.554510 | orchestrator | 2025-05-28 19:18:40 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:40.556643 | orchestrator | 2025-05-28 19:18:40 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:40.558186 | orchestrator | 2025-05-28 19:18:40 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:40.559752 | orchestrator | 2025-05-28 19:18:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:40.559780 | orchestrator | 2025-05-28 19:18:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:43.618201 | orchestrator | 2025-05-28 19:18:43 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:43.619519 | orchestrator | 2025-05-28 19:18:43 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:43.620661 | orchestrator | 2025-05-28 19:18:43 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:43.621693 | orchestrator | 2025-05-28 19:18:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:43.621721 | orchestrator | 2025-05-28 19:18:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:46.670945 | orchestrator | 2025-05-28 19:18:46 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:46.676836 | orchestrator | 2025-05-28 19:18:46 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:46.676918 | orchestrator | 2025-05-28 19:18:46 | INFO  | 
Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:46.677534 | orchestrator | 2025-05-28 19:18:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:46.677590 | orchestrator | 2025-05-28 19:18:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:49.732204 | orchestrator | 2025-05-28 19:18:49 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:49.732941 | orchestrator | 2025-05-28 19:18:49 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:49.734950 | orchestrator | 2025-05-28 19:18:49 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:49.736841 | orchestrator | 2025-05-28 19:18:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:49.737113 | orchestrator | 2025-05-28 19:18:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:52.777330 | orchestrator | 2025-05-28 19:18:52 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:52.777734 | orchestrator | 2025-05-28 19:18:52 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:52.784758 | orchestrator | 2025-05-28 19:18:52 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:52.784796 | orchestrator | 2025-05-28 19:18:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:52.784809 | orchestrator | 2025-05-28 19:18:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:55.835597 | orchestrator | 2025-05-28 19:18:55 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:55.835768 | orchestrator | 2025-05-28 19:18:55 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:55.835785 | orchestrator | 2025-05-28 19:18:55 | INFO  | Task 
6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:55.835795 | orchestrator | 2025-05-28 19:18:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:55.835806 | orchestrator | 2025-05-28 19:18:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:18:58.881673 | orchestrator | 2025-05-28 19:18:58 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:18:58.883140 | orchestrator | 2025-05-28 19:18:58 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:18:58.884027 | orchestrator | 2025-05-28 19:18:58 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:18:58.884940 | orchestrator | 2025-05-28 19:18:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:18:58.885015 | orchestrator | 2025-05-28 19:18:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:19:01.934960 | orchestrator | 2025-05-28 19:19:01 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:19:01.936334 | orchestrator | 2025-05-28 19:19:01 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:19:01.938509 | orchestrator | 2025-05-28 19:19:01 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED 2025-05-28 19:19:01.939804 | orchestrator | 2025-05-28 19:19:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:19:01.939890 | orchestrator | 2025-05-28 19:19:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:19:04.983224 | orchestrator | 2025-05-28 19:19:04 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:19:04.983316 | orchestrator | 2025-05-28 19:19:04 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:19:04.983355 | orchestrator | 2025-05-28 19:19:04 | INFO  | Task 
6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:19:04.984285 | orchestrator | 2025-05-28 19:19:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:19:04.984310 | orchestrator | 2025-05-28 19:19:04 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:19:08.033133 | orchestrator | 2025-05-28 19:19:08 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:19:08.035659 | orchestrator | 2025-05-28 19:19:08 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:19:08.035947 | orchestrator | 2025-05-28 19:19:08 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:19:08.037661 | orchestrator | 2025-05-28 19:19:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:19:08.037698 | orchestrator | 2025-05-28 19:19:08 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:19:11.074939 | orchestrator | 2025-05-28 19:19:11 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:19:11.075795 | orchestrator | 2025-05-28 19:19:11 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:19:11.076807 | orchestrator | 2025-05-28 19:19:11 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:19:11.077569 | orchestrator | 2025-05-28 19:19:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:19:11.077601 | orchestrator | 2025-05-28 19:19:11 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:19:14.118740 | orchestrator | 2025-05-28 19:19:14 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:19:14.119588 | orchestrator | 2025-05-28 19:19:14 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:19:14.121182 | orchestrator | 2025-05-28 19:19:14 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:19:14.123177 | orchestrator | 2025-05-28 19:19:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:19:14.123227 | orchestrator | 2025-05-28 19:19:14 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:19:17.183765 | orchestrator | 2025-05-28 19:19:17 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:19:17.184823 | orchestrator | 2025-05-28 19:19:17 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:19:17.186547 | orchestrator | 2025-05-28 19:19:17 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:19:17.188196 | orchestrator | 2025-05-28 19:19:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:19:17.188224 | orchestrator | 2025-05-28 19:19:17 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:19:20.233267 | orchestrator | 2025-05-28 19:19:20 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:19:20.233360 | orchestrator | 2025-05-28 19:19:20 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:19:20.233497 | orchestrator | 2025-05-28 19:19:20 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:19:20.237285 | orchestrator | 2025-05-28 19:19:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:19:20.237316 | orchestrator | 2025-05-28 19:19:20 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:19:23.283975 | orchestrator | 2025-05-28 19:19:23 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:19:23.286579 | orchestrator | 2025-05-28 19:19:23 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:19:23.286607 | orchestrator | 2025-05-28 19:19:23 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:19:23.286618 | orchestrator | 2025-05-28 19:19:23 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:19:23.286630 | orchestrator | 2025-05-28 19:19:23 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:19:26.329272 | orchestrator | 2025-05-28 19:19:26 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:19:26.330946 | orchestrator | 2025-05-28 19:19:26 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:19:26.331293 | orchestrator | 2025-05-28 19:19:26 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:19:26.331993 | orchestrator | 2025-05-28 19:19:26 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:19:26.332022 | orchestrator | 2025-05-28 19:19:26 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:19:29.371264 | orchestrator | 2025-05-28 19:19:29 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:19:29.372819 | orchestrator | 2025-05-28 19:19:29 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:19:29.374871 | orchestrator | 2025-05-28 19:19:29 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:19:29.376510 | orchestrator | 2025-05-28 19:19:29 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:19:29.376572 | orchestrator | 2025-05-28 19:19:29 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:19:32.418580 | orchestrator | 2025-05-28 19:19:32 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:19:32.420452 | orchestrator | 2025-05-28 19:19:32 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:19:32.422702 | orchestrator | 2025-05-28 19:19:32 | INFO  | Task
6988efbc-6369-4bb4-bc8d-02ce25677951 is in state STARTED
2025-05-28 19:19:32.424700 | orchestrator | 2025-05-28 19:19:32 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:19:32.424734 | orchestrator | 2025-05-28 19:19:32 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:19:35.484790 | orchestrator | 2025-05-28 19:19:35 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED
2025-05-28 19:19:35.486523 | orchestrator | 2025-05-28 19:19:35 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED
2025-05-28 19:19:35.489155 | orchestrator | 2025-05-28 19:19:35 | INFO  | Task 6988efbc-6369-4bb4-bc8d-02ce25677951 is in state SUCCESS
2025-05-28 19:19:35.492733 | orchestrator |
2025-05-28 19:19:35.492828 | orchestrator |
2025-05-28 19:19:35.492844 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 19:19:35.492881 | orchestrator |
2025-05-28 19:19:35.492893 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 19:19:35.492906 | orchestrator | Wednesday 28 May 2025 19:16:54 +0000 (0:00:00.227) 0:00:00.227 *********
2025-05-28 19:19:35.492917 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:19:35.492929 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:19:35.492941 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:19:35.492976 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:19:35.492988 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:19:35.492999 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:19:35.493010 | orchestrator |
2025-05-28 19:19:35.493047 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 19:19:35.493059 | orchestrator | Wednesday 28 May 2025 19:16:55 +0000 (0:00:00.984) 0:00:01.212 *********
2025-05-28 19:19:35.493070 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-05-28
19:19:35.493081 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-05-28 19:19:35.493092 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-05-28 19:19:35.493103 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-05-28 19:19:35.493114 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-05-28 19:19:35.493125 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-05-28 19:19:35.493136 | orchestrator |
2025-05-28 19:19:35.493147 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-05-28 19:19:35.493158 | orchestrator |
2025-05-28 19:19:35.493169 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-05-28 19:19:35.493180 | orchestrator | Wednesday 28 May 2025 19:16:57 +0000 (0:00:01.574) 0:00:02.786 *********
2025-05-28 19:19:35.493192 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:19:35.493204 | orchestrator |
2025-05-28 19:19:35.493215 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-05-28 19:19:35.493240 | orchestrator | Wednesday 28 May 2025 19:16:58 +0000 (0:00:01.449) 0:00:04.236 *********
2025-05-28 19:19:35.493256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493285 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493360 | orchestrator |
2025-05-28 19:19:35.493373 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-05-28 19:19:35.493409 | orchestrator | Wednesday 28 May 2025 19:17:00 +0000 (0:00:01.536) 0:00:05.773 *********
2025-05-28 19:19:35.493423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493455 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
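Every `changed:` item above repeats the same Kolla service definition for the `ovn_controller` container (name, image, bind-mounted volumes). As a rough illustration of how such a definition maps onto container-runtime arguments, here is a minimal sketch: the dict literal mirrors the log, but `to_docker_args` is a hypothetical helper for illustration only, not Kolla Ansible's actual mechanism (Kolla drives containers through its own Ansible modules).

```python
# Kolla-style service definition, copied from the log output above.
service = {
    "container_name": "ovn_controller",
    "group": "ovn-controller",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206",
    "volumes": [
        "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
        "/run/openvswitch:/run/openvswitch:shared",
        "/etc/localtime:/etc/localtime:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
}


def to_docker_args(svc):
    """Hypothetical helper: render a service dict as `docker run`-style arguments."""
    args = ["--name", svc["container_name"]]
    for volume in svc["volumes"]:
        # Each entry is already in docker's source:target[:mode] form.
        args += ["-v", volume]
    args.append(svc["image"])  # image comes last, as in `docker run`
    return args


print(to_docker_args(service))
```

Note how the read-only `config.json` mount (`/var/lib/kolla/config_files/`) and the shared `/run/openvswitch` socket directory are what the subsequent tasks populate and the container then consumes.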
2025-05-28 19:19:35.493468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493506 | orchestrator |
2025-05-28 19:19:35.493519 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-05-28 19:19:35.493532 | orchestrator | Wednesday 28 May 2025 19:17:02 +0000 (0:00:02.273) 0:00:08.047 *********
2025-05-28 19:19:35.493545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493566 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493593 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493645 | orchestrator |
2025-05-28 19:19:35.493657 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-05-28 19:19:35.493668 | orchestrator | Wednesday 28 May 2025 19:17:04 +0000 (0:00:01.480) 0:00:09.528 *********
2025-05-28 19:19:35.493679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493785 | orchestrator |
2025-05-28 19:19:35.493796 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-05-28 19:19:35.493807 | orchestrator | Wednesday 28 May 2025 19:17:07 +0000 (0:00:03.183) 0:00:12.711 *********
2025-05-28 19:19:35.493818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493829 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493846 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493868 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.493898 | orchestrator |
2025-05-28 19:19:35.493909 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-05-28 19:19:35.493920 | orchestrator | Wednesday 28 May 2025 19:17:09 +0000 (0:00:01.712) 0:00:14.424 *********
2025-05-28 19:19:35.493931 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:19:35.493943 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:19:35.493953 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:19:35.493964 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:19:35.493976 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:19:35.493987 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:19:35.493998 | orchestrator |
2025-05-28 19:19:35.494009 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-05-28 19:19:35.494069 | orchestrator | Wednesday 28 May 2025 19:17:12 +0000 (0:00:03.526) 0:00:17.950 *********
2025-05-28 19:19:35.494081 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-05-28 19:19:35.494093 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-05-28 19:19:35.494104 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-05-28 19:19:35.494121 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-05-28 19:19:35.494132 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-05-28 19:19:35.494143 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-05-28 19:19:35.494155 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-28 19:19:35.494166 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-28 19:19:35.494177 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-28 19:19:35.494188 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-28 19:19:35.494198 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-28 19:19:35.494210 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-28 19:19:35.494221 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-28 19:19:35.494234 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-28 19:19:35.494245 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-28 19:19:35.494256 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-28 19:19:35.494272 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-28 19:19:35.494284 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-28 19:19:35.494295 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-28 19:19:35.494306 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-28 19:19:35.494325 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-28 19:19:35.494336 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-28 19:19:35.494347 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-28 19:19:35.494358 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-28 19:19:35.494369 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-28 19:19:35.494403 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-28 19:19:35.494414 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-28 19:19:35.494425 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-28 19:19:35.494435 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-28 19:19:35.494446 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-28 19:19:35.494457 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-28 19:19:35.494468 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-28 19:19:35.494479 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-28 19:19:35.494490 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-28 19:19:35.494501 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-28 19:19:35.494513 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-28 19:19:35.494523 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-28 19:19:35.494535 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-28 19:19:35.494546 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-28 19:19:35.494557 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-28 19:19:35.494573 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-28 19:19:35.494584 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-05-28 19:19:35.494596 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-28 19:19:35.494607 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-05-28 19:19:35.494618 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-05-28 19:19:35.494629 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-05-28 19:19:35.494640 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-05-28 19:19:35.494652 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-28 19:19:35.494663 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-28 19:19:35.494674 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-05-28 19:19:35.494691 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-28 19:19:35.494703 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-28 19:19:35.494725 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-28 19:19:35.494737 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-28 19:19:35.494748 | orchestrator |
2025-05-28
19:19:35.494759 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-28 19:19:35.494770 | orchestrator | Wednesday 28 May 2025 19:17:32 +0000 (0:00:20.266) 0:00:38.217 *********
2025-05-28 19:19:35.494781 | orchestrator |
2025-05-28 19:19:35.494792 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-28 19:19:35.494803 | orchestrator | Wednesday 28 May 2025 19:17:33 +0000 (0:00:00.094) 0:00:38.312 *********
2025-05-28 19:19:35.494814 | orchestrator |
2025-05-28 19:19:35.494825 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-28 19:19:35.494836 | orchestrator | Wednesday 28 May 2025 19:17:33 +0000 (0:00:00.067) 0:00:38.379 *********
2025-05-28 19:19:35.494847 | orchestrator |
2025-05-28 19:19:35.494858 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-28 19:19:35.494869 | orchestrator | Wednesday 28 May 2025 19:17:33 +0000 (0:00:00.270) 0:00:38.649 *********
2025-05-28 19:19:35.494880 | orchestrator |
2025-05-28 19:19:35.494891 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-28 19:19:35.494902 | orchestrator | Wednesday 28 May 2025 19:17:33 +0000 (0:00:00.061) 0:00:38.710 *********
2025-05-28 19:19:35.494913 | orchestrator |
2025-05-28 19:19:35.494924 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-28 19:19:35.494935 | orchestrator | Wednesday 28 May 2025 19:17:33 +0000 (0:00:00.060) 0:00:38.771 *********
2025-05-28 19:19:35.494946 | orchestrator |
2025-05-28 19:19:35.494957 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-05-28 19:19:35.494968 | orchestrator | Wednesday 28 May 2025 19:17:33 +0000 (0:00:00.061) 0:00:38.833 *********
2025-05-28 19:19:35.494978 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:19:35.494989 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:19:35.495000 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:19:35.495011 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:19:35.495022 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:19:35.495033 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:19:35.495044 | orchestrator |
2025-05-28 19:19:35.495055 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-05-28 19:19:35.495066 | orchestrator | Wednesday 28 May 2025 19:17:36 +0000 (0:00:02.439) 0:00:41.272 *********
2025-05-28 19:19:35.495077 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:19:35.495088 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:19:35.495099 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:19:35.495110 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:19:35.495121 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:19:35.495132 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:19:35.495143 | orchestrator |
2025-05-28 19:19:35.495154 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-05-28 19:19:35.495165 | orchestrator |
2025-05-28 19:19:35.495176 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-28 19:19:35.495187 | orchestrator | Wednesday 28 May 2025 19:17:59 +0000 (0:00:23.232) 0:01:04.505 *********
2025-05-28 19:19:35.495198 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:19:35.495236 | orchestrator |
2025-05-28 19:19:35.495248 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-28 19:19:35.495259 | orchestrator | Wednesday 28 May 2025 19:17:59 +0000 (0:00:00.674) 0:01:05.180 *********
2025-05-28 19:19:35.495271 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:19:35.495282 | orchestrator |
2025-05-28 19:19:35.495299 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-05-28 19:19:35.495311 | orchestrator | Wednesday 28 May 2025 19:18:01 +0000 (0:00:01.521) 0:01:06.702 *********
2025-05-28 19:19:35.495322 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:19:35.495333 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:19:35.495344 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:19:35.495356 | orchestrator |
2025-05-28 19:19:35.495367 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-05-28 19:19:35.495394 | orchestrator | Wednesday 28 May 2025 19:18:02 +0000 (0:00:01.205) 0:01:07.907 *********
2025-05-28 19:19:35.495406 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:19:35.495417 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:19:35.495428 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:19:35.495439 | orchestrator |
2025-05-28 19:19:35.495450 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-05-28 19:19:35.495461 | orchestrator | Wednesday 28 May 2025 19:18:03 +0000 (0:00:00.507) 0:01:08.415 *********
2025-05-28 19:19:35.495472 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:19:35.495483 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:19:35.495494 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:19:35.495505 | orchestrator |
2025-05-28 19:19:35.495516 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-05-28 19:19:35.495527 | orchestrator | Wednesday 28 May 2025 19:18:04 +0000 (0:00:01.031) 0:01:09.446 *********
2025-05-28 19:19:35.495538 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:19:35.495549 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:19:35.495559 | orchestrator
| ok: [testbed-node-2] 2025-05-28 19:19:35.495570 | orchestrator | 2025-05-28 19:19:35.495581 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-28 19:19:35.495593 | orchestrator | Wednesday 28 May 2025 19:18:04 +0000 (0:00:00.770) 0:01:10.216 ********* 2025-05-28 19:19:35.495603 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.495615 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.495625 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.495636 | orchestrator | 2025-05-28 19:19:35.495647 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-28 19:19:35.495658 | orchestrator | Wednesday 28 May 2025 19:18:05 +0000 (0:00:00.556) 0:01:10.772 ********* 2025-05-28 19:19:35.495670 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.495681 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.495692 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.495703 | orchestrator | 2025-05-28 19:19:35.495714 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-28 19:19:35.495725 | orchestrator | Wednesday 28 May 2025 19:18:06 +0000 (0:00:00.665) 0:01:11.438 ********* 2025-05-28 19:19:35.495736 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.495747 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.495758 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.495769 | orchestrator | 2025-05-28 19:19:35.495780 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-28 19:19:35.495859 | orchestrator | Wednesday 28 May 2025 19:18:06 +0000 (0:00:00.622) 0:01:12.060 ********* 2025-05-28 19:19:35.495884 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.495895 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.495906 | orchestrator | skipping: 
[testbed-node-2] 2025-05-28 19:19:35.495917 | orchestrator | 2025-05-28 19:19:35.495928 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-28 19:19:35.495939 | orchestrator | Wednesday 28 May 2025 19:18:07 +0000 (0:00:00.858) 0:01:12.919 ********* 2025-05-28 19:19:35.495958 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.495969 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.495980 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.495991 | orchestrator | 2025-05-28 19:19:35.496002 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-28 19:19:35.496013 | orchestrator | Wednesday 28 May 2025 19:18:08 +0000 (0:00:00.412) 0:01:13.331 ********* 2025-05-28 19:19:35.496024 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496035 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496046 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.496056 | orchestrator | 2025-05-28 19:19:35.496067 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-28 19:19:35.496078 | orchestrator | Wednesday 28 May 2025 19:18:08 +0000 (0:00:00.762) 0:01:14.094 ********* 2025-05-28 19:19:35.496089 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496100 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496112 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.496122 | orchestrator | 2025-05-28 19:19:35.496134 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-28 19:19:35.496144 | orchestrator | Wednesday 28 May 2025 19:18:09 +0000 (0:00:01.142) 0:01:15.237 ********* 2025-05-28 19:19:35.496155 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496166 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496177 | orchestrator | skipping: 
[testbed-node-2] 2025-05-28 19:19:35.496188 | orchestrator | 2025-05-28 19:19:35.496199 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-28 19:19:35.496210 | orchestrator | Wednesday 28 May 2025 19:18:10 +0000 (0:00:00.859) 0:01:16.096 ********* 2025-05-28 19:19:35.496221 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496231 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496242 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.496253 | orchestrator | 2025-05-28 19:19:35.496264 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-28 19:19:35.496275 | orchestrator | Wednesday 28 May 2025 19:18:11 +0000 (0:00:00.418) 0:01:16.514 ********* 2025-05-28 19:19:35.496286 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496297 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496308 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.496339 | orchestrator | 2025-05-28 19:19:35.496351 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-28 19:19:35.496362 | orchestrator | Wednesday 28 May 2025 19:18:11 +0000 (0:00:00.502) 0:01:17.017 ********* 2025-05-28 19:19:35.496373 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496414 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496426 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.496437 | orchestrator | 2025-05-28 19:19:35.496456 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-28 19:19:35.496468 | orchestrator | Wednesday 28 May 2025 19:18:12 +0000 (0:00:00.441) 0:01:17.459 ********* 2025-05-28 19:19:35.496479 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496490 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496501 | orchestrator | skipping: 
[testbed-node-2] 2025-05-28 19:19:35.496512 | orchestrator | 2025-05-28 19:19:35.496523 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-28 19:19:35.496534 | orchestrator | Wednesday 28 May 2025 19:18:12 +0000 (0:00:00.394) 0:01:17.853 ********* 2025-05-28 19:19:35.496545 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496556 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496567 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.496578 | orchestrator | 2025-05-28 19:19:35.496588 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-28 19:19:35.496600 | orchestrator | Wednesday 28 May 2025 19:18:13 +0000 (0:00:00.469) 0:01:18.322 ********* 2025-05-28 19:19:35.496618 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:19:35.496629 | orchestrator | 2025-05-28 19:19:35.496640 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-28 19:19:35.496651 | orchestrator | Wednesday 28 May 2025 19:18:13 +0000 (0:00:00.871) 0:01:19.194 ********* 2025-05-28 19:19:35.496662 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.496673 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.496684 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.496695 | orchestrator | 2025-05-28 19:19:35.496706 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-28 19:19:35.496718 | orchestrator | Wednesday 28 May 2025 19:18:14 +0000 (0:00:00.493) 0:01:19.687 ********* 2025-05-28 19:19:35.496729 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.496740 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.496751 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.496762 | orchestrator | 2025-05-28 19:19:35.496773 | 
orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-28 19:19:35.496808 | orchestrator | Wednesday 28 May 2025 19:18:15 +0000 (0:00:00.627) 0:01:20.314 ********* 2025-05-28 19:19:35.496821 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496832 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496843 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.496854 | orchestrator | 2025-05-28 19:19:35.496865 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-28 19:19:35.496875 | orchestrator | Wednesday 28 May 2025 19:18:15 +0000 (0:00:00.524) 0:01:20.839 ********* 2025-05-28 19:19:35.496886 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496897 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496908 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.496919 | orchestrator | 2025-05-28 19:19:35.496930 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-28 19:19:35.496941 | orchestrator | Wednesday 28 May 2025 19:18:16 +0000 (0:00:00.456) 0:01:21.295 ********* 2025-05-28 19:19:35.496951 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.496962 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.496973 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.496984 | orchestrator | 2025-05-28 19:19:35.496995 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-28 19:19:35.497006 | orchestrator | Wednesday 28 May 2025 19:18:16 +0000 (0:00:00.342) 0:01:21.638 ********* 2025-05-28 19:19:35.497017 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.497028 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.497039 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.497049 | orchestrator | 2025-05-28 
19:19:35.497060 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-28 19:19:35.497071 | orchestrator | Wednesday 28 May 2025 19:18:16 +0000 (0:00:00.502) 0:01:22.141 ********* 2025-05-28 19:19:35.497082 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.497093 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.497104 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.497115 | orchestrator | 2025-05-28 19:19:35.497126 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-28 19:19:35.497137 | orchestrator | Wednesday 28 May 2025 19:18:17 +0000 (0:00:00.514) 0:01:22.655 ********* 2025-05-28 19:19:35.497148 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.497159 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.497169 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.497180 | orchestrator | 2025-05-28 19:19:35.497191 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-28 19:19:35.497202 | orchestrator | Wednesday 28 May 2025 19:18:17 +0000 (0:00:00.521) 0:01:23.177 ********* 2025-05-28 19:19:35.497214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497401 | orchestrator | 2025-05-28 19:19:35.497413 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-28 19:19:35.497424 | orchestrator | Wednesday 28 May 2025 19:18:19 +0000 (0:00:01.701) 0:01:24.878 ********* 2025-05-28 19:19:35.497455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497467 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497572 | orchestrator | 2025-05-28 19:19:35.497583 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-28 19:19:35.497600 | orchestrator | Wednesday 28 May 2025 19:18:24 +0000 (0:00:04.980) 0:01:29.859 ********* 2025-05-28 19:19:35.497612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-05-28 19:19:35.497682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.497739 | orchestrator | 2025-05-28 19:19:35.497750 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2025-05-28 19:19:35.497761 | orchestrator | Wednesday 28 May 2025 19:18:27 +0000 (0:00:02.890) 0:01:32.749 ********* 2025-05-28 19:19:35.497772 | orchestrator | 2025-05-28 19:19:35.497783 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 19:19:35.497803 | orchestrator | Wednesday 28 May 2025 19:18:27 +0000 (0:00:00.073) 0:01:32.823 ********* 2025-05-28 19:19:35.497821 | orchestrator | 2025-05-28 19:19:35.497840 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 19:19:35.497857 | orchestrator | Wednesday 28 May 2025 19:18:27 +0000 (0:00:00.076) 0:01:32.899 ********* 2025-05-28 19:19:35.497876 | orchestrator | 2025-05-28 19:19:35.497894 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-28 19:19:35.497914 | orchestrator | Wednesday 28 May 2025 19:18:27 +0000 (0:00:00.074) 0:01:32.974 ********* 2025-05-28 19:19:35.497925 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:19:35.497936 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:19:35.497947 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:19:35.497958 | orchestrator | 2025-05-28 19:19:35.497969 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-28 19:19:35.497980 | orchestrator | Wednesday 28 May 2025 19:18:35 +0000 (0:00:07.828) 0:01:40.802 ********* 2025-05-28 19:19:35.497991 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:19:35.498002 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:19:35.498013 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:19:35.498074 | orchestrator | 2025-05-28 19:19:35.498086 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-28 19:19:35.498097 | orchestrator | Wednesday 28 May 2025 19:18:43 +0000 (0:00:07.794) 
0:01:48.597 ********* 2025-05-28 19:19:35.498108 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:19:35.498119 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:19:35.498130 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:19:35.498141 | orchestrator | 2025-05-28 19:19:35.498152 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-28 19:19:35.498162 | orchestrator | Wednesday 28 May 2025 19:18:51 +0000 (0:00:08.027) 0:01:56.625 ********* 2025-05-28 19:19:35.498173 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.498184 | orchestrator | 2025-05-28 19:19:35.498195 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-28 19:19:35.498206 | orchestrator | Wednesday 28 May 2025 19:18:51 +0000 (0:00:00.139) 0:01:56.764 ********* 2025-05-28 19:19:35.498216 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.498228 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.498239 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.498250 | orchestrator | 2025-05-28 19:19:35.498268 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-28 19:19:35.498280 | orchestrator | Wednesday 28 May 2025 19:18:52 +0000 (0:00:00.965) 0:01:57.730 ********* 2025-05-28 19:19:35.498291 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.498302 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.498313 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:19:35.498324 | orchestrator | 2025-05-28 19:19:35.498335 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-28 19:19:35.498346 | orchestrator | Wednesday 28 May 2025 19:18:53 +0000 (0:00:00.600) 0:01:58.330 ********* 2025-05-28 19:19:35.498357 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.498368 | orchestrator | ok: [testbed-node-1] 2025-05-28 
19:19:35.498516 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.498555 | orchestrator | 2025-05-28 19:19:35.498568 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-28 19:19:35.498579 | orchestrator | Wednesday 28 May 2025 19:18:54 +0000 (0:00:01.023) 0:01:59.353 ********* 2025-05-28 19:19:35.498590 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.498601 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.498624 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:19:35.498635 | orchestrator | 2025-05-28 19:19:35.498646 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-28 19:19:35.498657 | orchestrator | Wednesday 28 May 2025 19:18:54 +0000 (0:00:00.589) 0:01:59.942 ********* 2025-05-28 19:19:35.498668 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.498679 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.498690 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.498701 | orchestrator | 2025-05-28 19:19:35.498712 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-28 19:19:35.498723 | orchestrator | Wednesday 28 May 2025 19:18:55 +0000 (0:00:01.108) 0:02:01.051 ********* 2025-05-28 19:19:35.498733 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.498744 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.498755 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.498766 | orchestrator | 2025-05-28 19:19:35.498777 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-28 19:19:35.498788 | orchestrator | Wednesday 28 May 2025 19:18:56 +0000 (0:00:00.814) 0:02:01.866 ********* 2025-05-28 19:19:35.498799 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.498814 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.498823 | orchestrator | ok: [testbed-node-2] 
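For context on the "Get OVN_Northbound cluster leader" step above: kolla-ansible inspects the Raft `cluster/status` output of each ovn-nb-db/ovn-sb-db node and treats the node reporting `Role: leader` as the one that should apply connection settings (which matches the log, where only testbed-node-0 shows `changed` for the connection-setting tasks). The sketch below is a minimal, hypothetical rendition of that selection logic; the abbreviated status text is illustrative, not copied from this run.

```python
def cluster_role(status_text: str) -> str:
    """Extract the Raft role ("leader"/"follower") from cluster/status output."""
    for line in status_text.splitlines():
        if line.strip().startswith("Role:"):
            return line.split(":", 1)[1].strip()
    return "unknown"


def find_leader(status_by_host: dict) -> str:
    """Return the first host whose status reports Role: leader, else None."""
    for host, text in status_by_host.items():
        if cluster_role(text) == "leader":
            return host
    return None


# Abbreviated, hypothetical cluster/status output per node.
statuses = {
    "testbed-node-0": "Name: OVN_Northbound\nRole: leader\nStatus: cluster member",
    "testbed-node-1": "Name: OVN_Northbound\nRole: follower\nStatus: cluster member",
    "testbed-node-2": "Name: OVN_Northbound\nRole: follower\nStatus: cluster member",
}
print(find_leader(statuses))  # testbed-node-0
```

A cluster with no node in the leader role would return `None` here, which corresponds to the "Fail on existing OVN NB cluster with no leader" guard task in the log.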
2025-05-28 19:19:35.498831 | orchestrator |
2025-05-28 19:19:35.498839 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-05-28 19:19:35.498847 | orchestrator | Wednesday 28 May 2025 19:18:57 +0000 (0:00:00.455) 0:02:02.321 *********
2025-05-28 19:19:35.498855 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498864 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498873 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498882 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498890 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498899 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498921 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498930 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498939 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498947 | orchestrator |
2025-05-28 19:19:35.498955 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-05-28 19:19:35.498963 | orchestrator | Wednesday 28 May 2025 19:18:58 +0000 (0:00:01.915) 0:02:04.237 *********
2025-05-28 19:19:35.498979 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498988 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.498996 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:19:35.499004 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499040 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499065 | orchestrator | 2025-05-28 19:19:35.499073 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-28 19:19:35.499081 | orchestrator | Wednesday 28 May 2025 19:19:03 +0000 (0:00:04.615) 0:02:08.853 ********* 2025-05-28 19:19:35.499093 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499101 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499110 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499118 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499126 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499134 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499148 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499170 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:19:35.499178 | orchestrator | 2025-05-28 19:19:35.499186 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 19:19:35.499194 | orchestrator | Wednesday 28 May 2025 19:19:06 +0000 (0:00:03.302) 0:02:12.156 ********* 2025-05-28 19:19:35.499202 | orchestrator | 2025-05-28 19:19:35.499210 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 19:19:35.499218 | orchestrator | Wednesday 28 May 2025 19:19:06 +0000 (0:00:00.055) 0:02:12.211 ********* 2025-05-28 19:19:35.499226 | orchestrator | 2025-05-28 19:19:35.499234 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-28 19:19:35.499242 | orchestrator | Wednesday 28 May 2025 19:19:07 +0000 (0:00:00.225) 0:02:12.437 ********* 2025-05-28 19:19:35.499250 | orchestrator | 2025-05-28 19:19:35.499258 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-28 19:19:35.499266 | orchestrator | Wednesday 28 May 2025 19:19:07 +0000 (0:00:00.057) 0:02:12.495 ********* 2025-05-28 19:19:35.499274 | orchestrator | changed: 
[testbed-node-1] 2025-05-28 19:19:35.499282 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:19:35.499291 | orchestrator | 2025-05-28 19:19:35.499299 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-28 19:19:35.499311 | orchestrator | Wednesday 28 May 2025 19:19:13 +0000 (0:00:06.306) 0:02:18.801 ********* 2025-05-28 19:19:35.499320 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:19:35.499328 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:19:35.499336 | orchestrator | 2025-05-28 19:19:35.499344 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-28 19:19:35.499352 | orchestrator | Wednesday 28 May 2025 19:19:20 +0000 (0:00:06.573) 0:02:25.374 ********* 2025-05-28 19:19:35.499360 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:19:35.499369 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:19:35.499399 | orchestrator | 2025-05-28 19:19:35.499410 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-28 19:19:35.499418 | orchestrator | Wednesday 28 May 2025 19:19:26 +0000 (0:00:06.430) 0:02:31.805 ********* 2025-05-28 19:19:35.499426 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:19:35.499434 | orchestrator | 2025-05-28 19:19:35.499442 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-28 19:19:35.499450 | orchestrator | Wednesday 28 May 2025 19:19:26 +0000 (0:00:00.267) 0:02:32.072 ********* 2025-05-28 19:19:35.499458 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.499466 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.499479 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.499487 | orchestrator | 2025-05-28 19:19:35.499495 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-28 19:19:35.499503 | orchestrator | Wednesday 
28 May 2025 19:19:27 +0000 (0:00:00.869) 0:02:32.941 ********* 2025-05-28 19:19:35.499511 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.499519 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.499527 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:19:35.499535 | orchestrator | 2025-05-28 19:19:35.499543 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-28 19:19:35.499551 | orchestrator | Wednesday 28 May 2025 19:19:28 +0000 (0:00:00.734) 0:02:33.676 ********* 2025-05-28 19:19:35.499559 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.499567 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.499575 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.499583 | orchestrator | 2025-05-28 19:19:35.499591 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-28 19:19:35.499599 | orchestrator | Wednesday 28 May 2025 19:19:29 +0000 (0:00:01.004) 0:02:34.680 ********* 2025-05-28 19:19:35.499607 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:19:35.499615 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:19:35.499623 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:19:35.499631 | orchestrator | 2025-05-28 19:19:35.499639 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-28 19:19:35.499647 | orchestrator | Wednesday 28 May 2025 19:19:30 +0000 (0:00:00.755) 0:02:35.436 ********* 2025-05-28 19:19:35.499655 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.499663 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.499671 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.499679 | orchestrator | 2025-05-28 19:19:35.499687 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-28 19:19:35.499695 | orchestrator | Wednesday 28 May 2025 19:19:31 +0000 (0:00:00.905) 
0:02:36.341 ********* 2025-05-28 19:19:35.499703 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:19:35.499711 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:19:35.499719 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:19:35.499726 | orchestrator | 2025-05-28 19:19:35.499734 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:19:35.499743 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-28 19:19:35.499751 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-28 19:19:35.499764 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-28 19:19:35.499772 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:19:35.499781 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:19:35.499789 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:19:35.499797 | orchestrator | 2025-05-28 19:19:35.499805 | orchestrator | 2025-05-28 19:19:35.499813 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:19:35.499821 | orchestrator | Wednesday 28 May 2025 19:19:32 +0000 (0:00:01.296) 0:02:37.637 ********* 2025-05-28 19:19:35.499829 | orchestrator | =============================================================================== 2025-05-28 19:19:35.499837 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.23s 2025-05-28 19:19:35.499845 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.27s 2025-05-28 19:19:35.499858 | orchestrator | ovn-db : Restart ovn-northd container 
---------------------------------- 14.46s 2025-05-28 19:19:35.499866 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.37s 2025-05-28 19:19:35.499874 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.13s 2025-05-28 19:19:35.499882 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.98s 2025-05-28 19:19:35.499890 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.62s 2025-05-28 19:19:35.499898 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.53s 2025-05-28 19:19:35.499910 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.30s 2025-05-28 19:19:35.499918 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.18s 2025-05-28 19:19:35.499926 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.89s 2025-05-28 19:19:35.499934 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.44s 2025-05-28 19:19:35.499942 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.27s 2025-05-28 19:19:35.499949 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.92s 2025-05-28 19:19:35.499957 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.71s 2025-05-28 19:19:35.499965 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.70s 2025-05-28 19:19:35.499974 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.57s 2025-05-28 19:19:35.499981 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.54s 2025-05-28 19:19:35.499989 | orchestrator | ovn-db : include_tasks 
-------------------------------------------------- 1.52s 2025-05-28 19:19:35.499997 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.48s 2025-05-28 19:19:35.500005 | orchestrator | 2025-05-28 19:19:35 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:19:35.500014 | orchestrator | 2025-05-28 19:19:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:19:38.541210 | orchestrator | 2025-05-28 19:19:38 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:19:38.541629 | orchestrator | 2025-05-28 19:19:38 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:19:38.542335 | orchestrator | 2025-05-28 19:19:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:19:38.542406 | orchestrator | 2025-05-28 19:19:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:19:41.583843 | orchestrator | 2025-05-28 19:19:41 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:19:41.583939 | orchestrator | 2025-05-28 19:19:41 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:19:41.584190 | orchestrator | 2025-05-28 19:19:41 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:19:41.584214 | orchestrator | 2025-05-28 19:19:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:19:44.635450 | orchestrator | 2025-05-28 19:19:44 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:19:44.636283 | orchestrator | 2025-05-28 19:19:44 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:19:44.637400 | orchestrator | 2025-05-28 19:19:44 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:19:44.637424 | orchestrator | 2025-05-28 19:19:44 | INFO  | Wait 1 second(s) until the next check 2025-05-28 
19:21:07.111426 | orchestrator | 2025-05-28 19:21:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:10.156225 | orchestrator | 2025-05-28 19:21:10 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:10.158090 | orchestrator | 2025-05-28 19:21:10 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:10.161981 | orchestrator | 2025-05-28 19:21:10 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:10.162053 | orchestrator | 2025-05-28 19:21:10 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:13.214439 | orchestrator | 2025-05-28 19:21:13 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:13.217142 | orchestrator | 2025-05-28 19:21:13 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:13.219486 | orchestrator | 2025-05-28 19:21:13 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:13.219545 | orchestrator | 2025-05-28 19:21:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:16.276720 | orchestrator | 2025-05-28 19:21:16 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:16.278543 | orchestrator | 2025-05-28 19:21:16 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:16.279163 | orchestrator | 2025-05-28 19:21:16 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:16.279188 | orchestrator | 2025-05-28 19:21:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:19.328905 | orchestrator | 2025-05-28 19:21:19 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:19.330013 | orchestrator | 2025-05-28 19:21:19 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:19.330626 | orchestrator | 2025-05-28 19:21:19 | 
INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:19.330701 | orchestrator | 2025-05-28 19:21:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:22.393533 | orchestrator | 2025-05-28 19:21:22 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:22.393636 | orchestrator | 2025-05-28 19:21:22 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:22.393652 | orchestrator | 2025-05-28 19:21:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:22.393664 | orchestrator | 2025-05-28 19:21:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:25.443279 | orchestrator | 2025-05-28 19:21:25 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:25.445818 | orchestrator | 2025-05-28 19:21:25 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:25.446496 | orchestrator | 2025-05-28 19:21:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:25.446825 | orchestrator | 2025-05-28 19:21:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:28.497115 | orchestrator | 2025-05-28 19:21:28 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:28.497520 | orchestrator | 2025-05-28 19:21:28 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:28.499155 | orchestrator | 2025-05-28 19:21:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:28.499633 | orchestrator | 2025-05-28 19:21:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:31.552049 | orchestrator | 2025-05-28 19:21:31 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:31.553969 | orchestrator | 2025-05-28 19:21:31 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in 
state STARTED 2025-05-28 19:21:31.555513 | orchestrator | 2025-05-28 19:21:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:31.555554 | orchestrator | 2025-05-28 19:21:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:34.615129 | orchestrator | 2025-05-28 19:21:34 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:34.616120 | orchestrator | 2025-05-28 19:21:34 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:34.617974 | orchestrator | 2025-05-28 19:21:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:34.618008 | orchestrator | 2025-05-28 19:21:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:37.677985 | orchestrator | 2025-05-28 19:21:37 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:37.679531 | orchestrator | 2025-05-28 19:21:37 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:37.683666 | orchestrator | 2025-05-28 19:21:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:37.684024 | orchestrator | 2025-05-28 19:21:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:40.743434 | orchestrator | 2025-05-28 19:21:40 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:40.743537 | orchestrator | 2025-05-28 19:21:40 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:40.744783 | orchestrator | 2025-05-28 19:21:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:40.744811 | orchestrator | 2025-05-28 19:21:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:43.801131 | orchestrator | 2025-05-28 19:21:43 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:43.802177 | orchestrator 
| 2025-05-28 19:21:43 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:43.803833 | orchestrator | 2025-05-28 19:21:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:43.803854 | orchestrator | 2025-05-28 19:21:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:46.855404 | orchestrator | 2025-05-28 19:21:46 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:46.855510 | orchestrator | 2025-05-28 19:21:46 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:46.855526 | orchestrator | 2025-05-28 19:21:46 | INFO  | Task 36949142-e106-4ae6-854a-5034d975da5f is in state STARTED 2025-05-28 19:21:46.855539 | orchestrator | 2025-05-28 19:21:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:46.855550 | orchestrator | 2025-05-28 19:21:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:49.901308 | orchestrator | 2025-05-28 19:21:49 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:49.904200 | orchestrator | 2025-05-28 19:21:49 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:49.905432 | orchestrator | 2025-05-28 19:21:49 | INFO  | Task 36949142-e106-4ae6-854a-5034d975da5f is in state STARTED 2025-05-28 19:21:49.906181 | orchestrator | 2025-05-28 19:21:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:49.907001 | orchestrator | 2025-05-28 19:21:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:52.956950 | orchestrator | 2025-05-28 19:21:52 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:52.958426 | orchestrator | 2025-05-28 19:21:52 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:52.960790 | orchestrator | 2025-05-28 19:21:52 | INFO  | 
Task 36949142-e106-4ae6-854a-5034d975da5f is in state STARTED 2025-05-28 19:21:52.962283 | orchestrator | 2025-05-28 19:21:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:52.962449 | orchestrator | 2025-05-28 19:21:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:56.025260 | orchestrator | 2025-05-28 19:21:56 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:56.028160 | orchestrator | 2025-05-28 19:21:56 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:56.031608 | orchestrator | 2025-05-28 19:21:56 | INFO  | Task 36949142-e106-4ae6-854a-5034d975da5f is in state STARTED 2025-05-28 19:21:56.033958 | orchestrator | 2025-05-28 19:21:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:56.034117 | orchestrator | 2025-05-28 19:21:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:21:59.085793 | orchestrator | 2025-05-28 19:21:59 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:21:59.085880 | orchestrator | 2025-05-28 19:21:59 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:21:59.086517 | orchestrator | 2025-05-28 19:21:59 | INFO  | Task 36949142-e106-4ae6-854a-5034d975da5f is in state SUCCESS 2025-05-28 19:21:59.092342 | orchestrator | 2025-05-28 19:21:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:21:59.093767 | orchestrator | 2025-05-28 19:21:59 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:02.166274 | orchestrator | 2025-05-28 19:22:02 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:02.168020 | orchestrator | 2025-05-28 19:22:02 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:02.168770 | orchestrator | 2025-05-28 19:22:02 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:02.168937 | orchestrator | 2025-05-28 19:22:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:05.226452 | orchestrator | 2025-05-28 19:22:05 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:05.230185 | orchestrator | 2025-05-28 19:22:05 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:05.232393 | orchestrator | 2025-05-28 19:22:05 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:05.232419 | orchestrator | 2025-05-28 19:22:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:08.280708 | orchestrator | 2025-05-28 19:22:08 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:08.283344 | orchestrator | 2025-05-28 19:22:08 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:08.285433 | orchestrator | 2025-05-28 19:22:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:08.285504 | orchestrator | 2025-05-28 19:22:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:11.338840 | orchestrator | 2025-05-28 19:22:11 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:11.339599 | orchestrator | 2025-05-28 19:22:11 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:11.341743 | orchestrator | 2025-05-28 19:22:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:11.341757 | orchestrator | 2025-05-28 19:22:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:14.396839 | orchestrator | 2025-05-28 19:22:14 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:14.398222 | orchestrator | 2025-05-28 19:22:14 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state 
STARTED 2025-05-28 19:22:14.399962 | orchestrator | 2025-05-28 19:22:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:14.400090 | orchestrator | 2025-05-28 19:22:14 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:17.461848 | orchestrator | 2025-05-28 19:22:17 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:17.462081 | orchestrator | 2025-05-28 19:22:17 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:17.462897 | orchestrator | 2025-05-28 19:22:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:17.463003 | orchestrator | 2025-05-28 19:22:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:20.532131 | orchestrator | 2025-05-28 19:22:20 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:20.534183 | orchestrator | 2025-05-28 19:22:20 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:20.535684 | orchestrator | 2025-05-28 19:22:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:20.535726 | orchestrator | 2025-05-28 19:22:20 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:23.591419 | orchestrator | 2025-05-28 19:22:23 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:23.594180 | orchestrator | 2025-05-28 19:22:23 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:23.596642 | orchestrator | 2025-05-28 19:22:23 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:23.596882 | orchestrator | 2025-05-28 19:22:23 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:26.650899 | orchestrator | 2025-05-28 19:22:26 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:26.652130 | orchestrator | 
2025-05-28 19:22:26 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:26.654360 | orchestrator | 2025-05-28 19:22:26 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:26.654397 | orchestrator | 2025-05-28 19:22:26 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:29.714216 | orchestrator | 2025-05-28 19:22:29 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:29.715769 | orchestrator | 2025-05-28 19:22:29 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:29.716828 | orchestrator | 2025-05-28 19:22:29 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:29.717028 | orchestrator | 2025-05-28 19:22:29 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:32.767750 | orchestrator | 2025-05-28 19:22:32 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:32.767870 | orchestrator | 2025-05-28 19:22:32 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:32.768204 | orchestrator | 2025-05-28 19:22:32 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:32.768232 | orchestrator | 2025-05-28 19:22:32 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:35.826230 | orchestrator | 2025-05-28 19:22:35 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:35.826999 | orchestrator | 2025-05-28 19:22:35 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:35.828616 | orchestrator | 2025-05-28 19:22:35 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:35.829106 | orchestrator | 2025-05-28 19:22:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:38.892401 | orchestrator | 2025-05-28 19:22:38 | INFO  | Task 
e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:38.892990 | orchestrator | 2025-05-28 19:22:38 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:38.895012 | orchestrator | 2025-05-28 19:22:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:38.895046 | orchestrator | 2025-05-28 19:22:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:41.945785 | orchestrator | 2025-05-28 19:22:41 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:41.945876 | orchestrator | 2025-05-28 19:22:41 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:41.947394 | orchestrator | 2025-05-28 19:22:41 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:41.948341 | orchestrator | 2025-05-28 19:22:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:44.997022 | orchestrator | 2025-05-28 19:22:44 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:44.999175 | orchestrator | 2025-05-28 19:22:44 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:45.002598 | orchestrator | 2025-05-28 19:22:45 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:45.002634 | orchestrator | 2025-05-28 19:22:45 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:48.062678 | orchestrator | 2025-05-28 19:22:48 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:48.064490 | orchestrator | 2025-05-28 19:22:48 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:48.065912 | orchestrator | 2025-05-28 19:22:48 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:48.065947 | orchestrator | 2025-05-28 19:22:48 | INFO  | Wait 1 second(s) until the next 
check 2025-05-28 19:22:51.127494 | orchestrator | 2025-05-28 19:22:51 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:51.128706 | orchestrator | 2025-05-28 19:22:51 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:51.130331 | orchestrator | 2025-05-28 19:22:51 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:51.130361 | orchestrator | 2025-05-28 19:22:51 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:54.177744 | orchestrator | 2025-05-28 19:22:54 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:54.177851 | orchestrator | 2025-05-28 19:22:54 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:54.180905 | orchestrator | 2025-05-28 19:22:54 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:54.180936 | orchestrator | 2025-05-28 19:22:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:22:57.215039 | orchestrator | 2025-05-28 19:22:57 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:22:57.218488 | orchestrator | 2025-05-28 19:22:57 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:22:57.220899 | orchestrator | 2025-05-28 19:22:57 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:22:57.220925 | orchestrator | 2025-05-28 19:22:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:23:00.269448 | orchestrator | 2025-05-28 19:23:00 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:23:00.272476 | orchestrator | 2025-05-28 19:23:00 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:23:00.275249 | orchestrator | 2025-05-28 19:23:00 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 
19:23:00.275555 | orchestrator | 2025-05-28 19:23:00 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:23:03.331123 | orchestrator | 2025-05-28 19:23:03 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:23:03.332747 | orchestrator | 2025-05-28 19:23:03 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:23:03.334569 | orchestrator | 2025-05-28 19:23:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:23:03.334612 | orchestrator | 2025-05-28 19:23:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:23:06.386346 | orchestrator | 2025-05-28 19:23:06 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:23:06.386908 | orchestrator | 2025-05-28 19:23:06 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:23:06.388959 | orchestrator | 2025-05-28 19:23:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:23:06.390414 | orchestrator | 2025-05-28 19:23:06 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:23:09.430401 | orchestrator | 2025-05-28 19:23:09 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:23:09.431057 | orchestrator | 2025-05-28 19:23:09 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:23:09.431931 | orchestrator | 2025-05-28 19:23:09 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:23:09.431961 | orchestrator | 2025-05-28 19:23:09 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:23:12.486462 | orchestrator | 2025-05-28 19:23:12 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:23:12.486911 | orchestrator | 2025-05-28 19:23:12 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:23:12.490652 | orchestrator | 2025-05-28 19:23:12 | 
INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:23:12.490683 | orchestrator | 2025-05-28 19:23:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:23:15.550612 | orchestrator | 2025-05-28 19:23:15 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:23:15.551052 | orchestrator | 2025-05-28 19:23:15 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:23:15.552825 | orchestrator | 2025-05-28 19:23:15 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:23:15.552851 | orchestrator | 2025-05-28 19:23:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:23:18.599707 | orchestrator | 2025-05-28 19:23:18 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:23:18.600752 | orchestrator | 2025-05-28 19:23:18 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:23:18.602942 | orchestrator | 2025-05-28 19:23:18 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:23:18.603458 | orchestrator | 2025-05-28 19:23:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:23:21.648371 | orchestrator | 2025-05-28 19:23:21 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in state STARTED 2025-05-28 19:23:21.651806 | orchestrator | 2025-05-28 19:23:21 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:23:21.654633 | orchestrator | 2025-05-28 19:23:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:23:21.654948 | orchestrator | 2025-05-28 19:23:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:23:24.711868 | orchestrator | 2025-05-28 19:23:24 | INFO  | Task fc661ab2-da4d-44bc-9a08-a7d94488992a is in state STARTED 2025-05-28 19:23:24.726663 | orchestrator | 2025-05-28 19:23:24 | INFO  | Task e4512d64-cabc-42b2-8801-daf91dfc1545 is in 
state SUCCESS 2025-05-28 19:23:24.729065 | orchestrator | 2025-05-28 19:23:24.729118 | orchestrator | None 2025-05-28 19:23:24.729132 | orchestrator | 2025-05-28 19:23:24.729144 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:23:24.729156 | orchestrator | 2025-05-28 19:23:24.729167 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:23:24.729179 | orchestrator | Wednesday 28 May 2025 19:15:32 +0000 (0:00:00.267) 0:00:00.267 ********* 2025-05-28 19:23:24.729190 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:23:24.729203 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:23:24.729214 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:23:24.729224 | orchestrator | 2025-05-28 19:23:24.729236 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:23:24.729247 | orchestrator | Wednesday 28 May 2025 19:15:33 +0000 (0:00:00.398) 0:00:00.666 ********* 2025-05-28 19:23:24.729288 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-28 19:23:24.729300 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-28 19:23:24.729311 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-28 19:23:24.729322 | orchestrator | 2025-05-28 19:23:24.729333 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-28 19:23:24.729344 | orchestrator | 2025-05-28 19:23:24.729355 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-28 19:23:24.729366 | orchestrator | Wednesday 28 May 2025 19:15:33 +0000 (0:00:00.321) 0:00:00.987 ********* 2025-05-28 19:23:24.729377 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.729388 | orchestrator | 
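The status checks above follow a simple pattern: poll each pending task, log its state, drop it from the pending set once it leaves STARTED, and sleep between rounds. A minimal sketch of that loop, assuming a hypothetical `get_state` callable standing in for the real task-status client (which is not shown in this log):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll task states until no task is in STARTED anymore.

    get_state: hypothetical callable mapping a task id to its current
    state string ("STARTED", "SUCCESS", ...). Sketch of the polling
    pattern visible in the log, not the actual orchestrator code.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                # Task finished (or failed); record its terminal state.
                results[task_id] = state
        pending -= set(results)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

Note how a task simply stops appearing in later rounds once it reports SUCCESS, matching the disappearance of task 36949142-e106-4ae6-854a-5034d975da5f after 19:21:59 above.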
2025-05-28 19:23:24.729415 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-28 19:23:24.729426 | orchestrator | Wednesday 28 May 2025 19:15:34 +0000 (0:00:00.769) 0:00:01.756 ********* 2025-05-28 19:23:24.729437 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:23:24.729448 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:23:24.729459 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:23:24.729470 | orchestrator | 2025-05-28 19:23:24.729481 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-28 19:23:24.729492 | orchestrator | Wednesday 28 May 2025 19:15:35 +0000 (0:00:00.887) 0:00:02.644 ********* 2025-05-28 19:23:24.729503 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.729514 | orchestrator | 2025-05-28 19:23:24.729525 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-28 19:23:24.729537 | orchestrator | Wednesday 28 May 2025 19:15:35 +0000 (0:00:00.680) 0:00:03.324 ********* 2025-05-28 19:23:24.729548 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:23:24.729559 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:23:24.729570 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:23:24.729581 | orchestrator | 2025-05-28 19:23:24.729592 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-28 19:23:24.729603 | orchestrator | Wednesday 28 May 2025 19:15:36 +0000 (0:00:00.968) 0:00:04.293 ********* 2025-05-28 19:23:24.729614 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-28 19:23:24.729625 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-28 19:23:24.729636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 
2025-05-28 19:23:24.729955 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-28 19:23:24.729968 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-28 19:23:24.729980 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-28 19:23:24.729991 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-28 19:23:24.730003 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-28 19:23:24.730014 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-28 19:23:24.730077 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-28 19:23:24.730088 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-28 19:23:24.730099 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-28 19:23:24.730110 | orchestrator | 2025-05-28 19:23:24.730121 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-28 19:23:24.730132 | orchestrator | Wednesday 28 May 2025 19:15:40 +0000 (0:00:03.631) 0:00:07.925 ********* 2025-05-28 19:23:24.730143 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-28 19:23:24.730154 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-28 19:23:24.730165 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-28 19:23:24.730177 | orchestrator | 2025-05-28 19:23:24.730188 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-28 19:23:24.730199 | orchestrator | Wednesday 28 May 2025 19:15:41 +0000 (0:00:00.995) 0:00:08.920 
********* 2025-05-28 19:23:24.730210 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-28 19:23:24.730221 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-28 19:23:24.730232 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-28 19:23:24.730243 | orchestrator | 2025-05-28 19:23:24.730254 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-28 19:23:24.730295 | orchestrator | Wednesday 28 May 2025 19:15:43 +0000 (0:00:01.809) 0:00:10.729 ********* 2025-05-28 19:23:24.730306 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-28 19:23:24.730353 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.730383 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-28 19:23:24.730395 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.730454 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-28 19:23:24.730465 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.730476 | orchestrator | 2025-05-28 19:23:24.730487 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-28 19:23:24.730498 | orchestrator | Wednesday 28 May 2025 19:15:44 +0000 (0:00:01.093) 0:00:11.823 ********* 2025-05-28 19:23:24.730526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
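The "Setting sysctl values" task above shows a sentinel convention: items with a real value (e.g. `net.ipv4.ip_nonlocal_bind: 1`) report `changed`, while the item whose value is the string `KOLLA_UNSET` (`net.ipv4.tcp_retries2`) reports `ok` and is left untouched. A small sketch of that filtering, written from the observed log output rather than the role's actual source:

```python
def effective_sysctls(settings):
    """Return only the sysctl entries that would actually be written.

    Entries whose value is the sentinel string 'KOLLA_UNSET' are
    skipped (the task leaves them alone and reports 'ok'); the rest
    would be applied, e.g. via `sysctl -w name=value`.
    Sketch of the apparent behavior, not the real Ansible role code.
    """
    return {
        item["name"]: item["value"]
        for item in settings
        if item["value"] != "KOLLA_UNSET"
    }
```

Feeding in the four items from the log yields only the three bind/queue settings, which is consistent with the three `changed` results and one `ok` result per node above.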
2025-05-28 19:23:24.730563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.730586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.730598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.730611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.730630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.730643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 
19:23:24.730661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.730681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.730693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.730705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.730717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.730728 | orchestrator | 2025-05-28 19:23:24.730740 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-28 19:23:24.730751 | orchestrator | Wednesday 28 May 2025 19:15:47 +0000 (0:00:03.187) 0:00:15.011 ********* 2025-05-28 19:23:24.730762 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.730773 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.730784 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.730795 | orchestrator | 2025-05-28 19:23:24.730812 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-28 
19:23:24.730824 | orchestrator | Wednesday 28 May 2025 19:15:50 +0000 (0:00:02.959) 0:00:17.970 ********* 2025-05-28 19:23:24.730835 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-28 19:23:24.730846 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-28 19:23:24.730857 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-28 19:23:24.730868 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-28 19:23:24.730879 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-28 19:23:24.730889 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-28 19:23:24.730907 | orchestrator | 2025-05-28 19:23:24.730918 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-28 19:23:24.730929 | orchestrator | Wednesday 28 May 2025 19:15:54 +0000 (0:00:04.449) 0:00:22.419 ********* 2025-05-28 19:23:24.731180 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.731206 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.731218 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.731229 | orchestrator | 2025-05-28 19:23:24.731240 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-28 19:23:24.731251 | orchestrator | Wednesday 28 May 2025 19:15:56 +0000 (0:00:01.890) 0:00:24.309 ********* 2025-05-28 19:23:24.731279 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:23:24.731290 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:23:24.731302 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:23:24.731312 | orchestrator | 2025-05-28 19:23:24.731323 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-28 19:23:24.731341 | orchestrator | Wednesday 28 May 2025 19:15:59 +0000 (0:00:02.893) 0:00:27.203 ********* 2025-05-28 19:23:24.731353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 19:23:24.731366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 19:23:24.731378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 19:23:24.731390 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 19:23:24.731410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 19:23:24.731448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2025-05-28 19:23:24.731466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.731478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.731489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.731501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.731512 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.731524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.731542 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.731562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.731574 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.731585 | orchestrator | 2025-05-28 19:23:24.731596 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-28 19:23:24.731607 | orchestrator | Wednesday 28 May 2025 19:16:05 +0000 (0:00:05.469) 0:00:32.672 ********* 2025-05-28 19:23:24.731624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.731642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.731766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.731796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.731816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.731838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.731856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.731868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.731880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.731892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.731904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.731935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.731947 | orchestrator | 2025-05-28 19:23:24.731958 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-28 19:23:24.731970 | orchestrator | Wednesday 28 May 2025 19:16:09 +0000 (0:00:03.841) 0:00:36.513 ********* 2025-05-28 19:23:24.731981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.731998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.732010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.732021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.732033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.732061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.732251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.732339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.732352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.732364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.732376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.732428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.732441 | orchestrator | 2025-05-28 19:23:24.732452 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-28 19:23:24.732464 | orchestrator | Wednesday 28 May 2025 19:16:12 +0000 (0:00:03.496) 0:00:40.010 ********* 2025-05-28 19:23:24.732483 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-28 19:23:24.732496 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-28 19:23:24.732507 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-28 19:23:24.732518 | orchestrator | 2025-05-28 19:23:24.732530 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-28 19:23:24.732541 | orchestrator | Wednesday 28 May 2025 19:16:15 +0000 (0:00:03.093) 0:00:43.103 ********* 2025-05-28 19:23:24.732552 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-28 19:23:24.732563 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-28 19:23:24.732574 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-28 19:23:24.732584 | orchestrator | 2025-05-28 19:23:24.732596 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-28 19:23:24.732607 | orchestrator | Wednesday 28 May 2025 19:16:19 +0000 (0:00:04.315) 0:00:47.419 ********* 2025-05-28 19:23:24.732618 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.732629 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.732640 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.732651 | orchestrator | 2025-05-28 19:23:24.732667 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-28 19:23:24.732679 | orchestrator | Wednesday 28 May 2025 19:16:21 +0000 (0:00:01.236) 0:00:48.656 ********* 2025-05-28 19:23:24.732690 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-28 19:23:24.732703 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-28 19:23:24.732714 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-28 19:23:24.732725 | orchestrator | 2025-05-28 19:23:24.732736 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-28 19:23:24.732747 | orchestrator | Wednesday 28 May 2025 19:16:24 +0000 (0:00:03.390) 0:00:52.047 ********* 2025-05-28 19:23:24.732758 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-28 19:23:24.732769 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-28 19:23:24.732788 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-28 19:23:24.732799 | orchestrator | 2025-05-28 19:23:24.732810 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-28 19:23:24.732821 | orchestrator | Wednesday 28 May 2025 19:16:28 +0000 (0:00:03.650) 0:00:55.697 ********* 2025-05-28 19:23:24.732832 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-28 19:23:24.732842 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-28 19:23:24.732852 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-28 19:23:24.732862 | orchestrator | 2025-05-28 19:23:24.732871 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-28 19:23:24.732881 | orchestrator | Wednesday 28 May 2025 19:16:31 +0000 (0:00:03.081) 0:00:58.779 ********* 2025-05-28 19:23:24.732891 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-28 19:23:24.732901 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-28 19:23:24.732911 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-28 19:23:24.732920 | orchestrator | 2025-05-28 19:23:24.732930 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-28 19:23:24.732940 | orchestrator | Wednesday 28 May 2025 19:16:33 +0000 (0:00:02.252) 0:01:01.032 ********* 2025-05-28 19:23:24.732950 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.732960 | 
orchestrator | 2025-05-28 19:23:24.732969 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-28 19:23:24.732979 | orchestrator | Wednesday 28 May 2025 19:16:34 +0000 (0:00:00.767) 0:01:01.799 ********* 2025-05-28 19:23:24.732989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.733008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.733024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.733034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.733051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.733062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.733072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.733090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.733101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.733111 | orchestrator | 2025-05-28 19:23:24.733121 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-28 19:23:24.733131 | orchestrator | Wednesday 28 May 2025 19:16:37 +0000 (0:00:03.369) 0:01:05.168 ********* 2025-05-28 19:23:24.733145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 19:23:24.733349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 19:23:24.733363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.733373 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.733384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 19:23:24.733395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 19:23:24.733413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.733424 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.733435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 19:23:24.733456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 19:23:24.733468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.733478 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.733488 | orchestrator | 2025-05-28 19:23:24.733498 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-28 19:23:24.733508 | orchestrator | Wednesday 28 May 2025 19:16:38 +0000 (0:00:01.010) 0:01:06.178 ********* 2025-05-28 19:23:24.733519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-28 19:23:24.733529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 19:23:24.733545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.733556 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.733566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-28 19:23:24.733587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 19:23:24.733597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.733608 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.733618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-28 19:23:24.733657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-28 19:23:24.733669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-28 19:23:24.733679 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.733689 | orchestrator | 2025-05-28 19:23:24.733699 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-28 19:23:24.733714 | orchestrator | Wednesday 28 May 2025 19:16:39 +0000 (0:00:01.181) 0:01:07.360 ********* 2025-05-28 19:23:24.733725 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-28 19:23:24.733735 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-28 19:23:24.733755 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-28 19:23:24.733765 | orchestrator | 2025-05-28 19:23:24.733775 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-28 19:23:24.733785 | orchestrator | Wednesday 28 May 2025 19:16:41 +0000 (0:00:01.993) 0:01:09.354 ********* 2025-05-28 19:23:24.733795 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-28 19:23:24.733805 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-28 19:23:24.733814 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-28 19:23:24.733824 | orchestrator | 2025-05-28 19:23:24.733834 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-28 19:23:24.733860 | orchestrator | Wednesday 28 May 2025 19:16:44 +0000 (0:00:02.233) 0:01:11.587 ********* 2025-05-28 19:23:24.733870 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-28 19:23:24.733884 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-28 19:23:24.733894 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-28 19:23:24.733904 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-28 19:23:24.733914 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.733924 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-28 19:23:24.733934 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.733943 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-28 19:23:24.733953 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.733962 | orchestrator | 2025-05-28 19:23:24.733972 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-28 19:23:24.733982 | orchestrator | Wednesday 28 May 2025 19:16:46 +0000 (0:00:01.967) 0:01:13.555 ********* 2025-05-28 19:23:24.733992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.734003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.734013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-28 19:23:24.734085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.734097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.734113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-28 19:23:24.734123 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.734134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.734144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.734166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.734177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-28 19:23:24.734228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2', '__omit_place_holder__3116c0dfe4b59bf326b1263c0f75e6d57f360bc2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-28 19:23:24.734240 | orchestrator | 2025-05-28 19:23:24.734250 | orchestrator | TASK [include_role : aodh] 
***************************************************** 2025-05-28 19:23:24.734308 | orchestrator | Wednesday 28 May 2025 19:16:49 +0000 (0:00:03.120) 0:01:16.675 ********* 2025-05-28 19:23:24.734320 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.734330 | orchestrator | 2025-05-28 19:23:24.734340 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-28 19:23:24.734350 | orchestrator | Wednesday 28 May 2025 19:16:50 +0000 (0:00:00.982) 0:01:17.658 ********* 2025-05-28 19:23:24.734361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-28 19:23:24.734373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.734391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-28 19:23:24.734437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.734448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-28 19:23:24.734491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.734502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734528 | orchestrator | 2025-05-28 19:23:24.734551 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-28 19:23:24.734562 | orchestrator | Wednesday 28 May 2025 19:16:55 +0000 (0:00:05.144) 0:01:22.803 ********* 2025-05-28 19:23:24.734572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-28 19:23:24.734583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.734599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734648 | orchestrator | 
skipping: [testbed-node-0] 2025-05-28 19:23:24.734659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-28 19:23:24.734674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.734685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734711 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.734722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-28 19:23:24.734740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.734751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.734776 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.734786 | orchestrator | 2025-05-28 19:23:24.734796 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-28 19:23:24.734806 | orchestrator | Wednesday 28 May 2025 19:16:56 +0000 (0:00:00.945) 
0:01:23.749 *********
2025-05-28 19:23:24.734816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-28 19:23:24.734828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-28 19:23:24.734836 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.734849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-28 19:23:24.734858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-28 19:23:24.734866 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.734874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-28 19:23:24.734882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-28 19:23:24.734890 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.734898 | orchestrator |
2025-05-28 19:23:24.734906 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-05-28 19:23:24.734914 | orchestrator | Wednesday 28 May 2025 19:16:57 +0000 (0:00:01.609) 0:01:25.358 *********
2025-05-28 19:23:24.734922 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.734930 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.734938 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.734946 | orchestrator |
2025-05-28 19:23:24.734954 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-05-28 19:23:24.734962 | orchestrator | Wednesday 28 May 2025 19:16:59 +0000 (0:00:01.579) 0:01:26.937 *********
2025-05-28 19:23:24.734970 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.734978 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.734986 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.734994 | orchestrator |
2025-05-28 19:23:24.735002 | orchestrator | TASK [include_role : barbican] *************************************************
2025-05-28 19:23:24.735010 | orchestrator | Wednesday 28 May 2025 19:17:01 +0000 (0:00:02.518) 0:01:29.456 *********
2025-05-28 19:23:24.735018 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.735025 | orchestrator |
2025-05-28 19:23:24.735033 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-05-28 19:23:24.735041 | orchestrator | Wednesday 28 May 2025 19:17:02 +0000 (0:00:00.959) 0:01:30.415 *********
2025-05-28 19:23:24.735056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy':
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.735073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.735087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.735096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.735106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.735120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.735132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.735146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.735154 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.735163 | orchestrator | 2025-05-28 19:23:24.735171 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-28 19:23:24.735179 | orchestrator | Wednesday 28 May 2025 19:17:08 +0000 (0:00:05.594) 0:01:36.010 ********* 2025-05-28 19:23:24.735188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.735203 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.736526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.736710 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.736748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.736765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.736777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.736789 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.736801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.736833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.736859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.736870 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.736882 | orchestrator | 2025-05-28 19:23:24.736894 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-28 19:23:24.736907 | orchestrator | Wednesday 28 May 2025 19:17:09 +0000 (0:00:00.871) 0:01:36.882 ********* 2025-05-28 19:23:24.736919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-28 19:23:24.736931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-28 19:23:24.736943 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.736962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-28 19:23:24.736986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-28 19:23:24.737005 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.737023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-05-28 19:23:24.737043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-05-28 19:23:24.737061 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.737082 | orchestrator |
2025-05-28 19:23:24.737102 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-05-28 19:23:24.737129 | orchestrator | Wednesday 28 May 2025 19:17:10 +0000 (0:00:01.545) 0:01:38.427 *********
2025-05-28 19:23:24.737148 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.737169 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.737189 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.737208 | orchestrator |
2025-05-28 19:23:24.737228 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-05-28 19:23:24.737246 | orchestrator | Wednesday 28 May 2025 19:17:12 +0000 (0:00:01.405) 0:01:39.832 *********
2025-05-28 19:23:24.737330 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.737350 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.737366 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.737377 | orchestrator |
2025-05-28 19:23:24.737388 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-05-28 19:23:24.737399 | orchestrator | Wednesday 28 May 2025 19:17:15 +0000 (0:00:03.171) 0:01:43.004 *********
2025-05-28 19:23:24.737422 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.737433 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.737444 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.737454 | orchestrator |
2025-05-28 19:23:24.737465 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-05-28 19:23:24.737476 | orchestrator | Wednesday 28 May 2025 19:17:15 +0000 (0:00:00.330) 0:01:43.334 *********
2025-05-28 19:23:24.737487 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.737498 | orchestrator |
2025-05-28 19:23:24.737509 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-05-28 19:23:24.737520 | orchestrator | Wednesday 28 May 2025 19:17:16 +0000 (0:00:00.923) 0:01:44.257 *********
2025-05-28 19:23:24.737552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-28 19:23:24.737567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']},
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-28 19:23:24.737579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-28 19:23:24.737591 | orchestrator | 2025-05-28 19:23:24.737603 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-28 19:23:24.737614 | orchestrator | Wednesday 28 May 2025 19:17:20 +0000 (0:00:03.297) 0:01:47.555 ********* 2025-05-28 19:23:24.737626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-28 19:23:24.737645 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.737657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-28 19:23:24.737669 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.737692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-28 19:23:24.737705 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.737716 | orchestrator | 2025-05-28 19:23:24.737727 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-28 19:23:24.737738 | orchestrator | Wednesday 28 May 2025 19:17:21 +0000 (0:00:01.700) 0:01:49.256 ********* 2025-05-28 19:23:24.737751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-28 19:23:24.737763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-28 19:23:24.737776 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.737788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-28 19:23:24.737800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-28 19:23:24.737817 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.737829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-28 19:23:24.737841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-28 19:23:24.737852 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.737863 | orchestrator |
2025-05-28 19:23:24.737874 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-05-28 19:23:24.737885 | orchestrator | Wednesday 28 May 2025 19:17:24 +0000 (0:00:02.477) 0:01:51.733 *********
2025-05-28 19:23:24.737896 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.737907 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.737918 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.737929 | orchestrator |
2025-05-28 19:23:24.737940 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-05-28 19:23:24.737957 | orchestrator | Wednesday 28 May 2025 19:17:25 +0000 (0:00:00.863) 0:01:52.597 *********
2025-05-28 19:23:24.737968 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.737980 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.737991 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.738002 | orchestrator |
2025-05-28 19:23:24.738013 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-05-28 19:23:24.738099 | orchestrator | Wednesday 28 May 2025 19:17:26 +0000 (0:00:01.315) 0:01:53.913 *********
2025-05-28 19:23:24.738118 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.738138 | orchestrator |
2025-05-28 19:23:24.738163 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-05-28 19:23:24.738175 | orchestrator | Wednesday 28 May 2025 19:17:27 +0000 (0:00:01.010) 0:01:54.924 *********
2025-05-28 19:23:24.738187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776',
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.738202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.738364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.738425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738472 | orchestrator | 2025-05-28 19:23:24.738483 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-28 19:23:24.738494 | orchestrator | Wednesday 28 May 2025 19:17:32 +0000 (0:00:05.083) 0:02:00.007 ********* 2025-05-28 19:23:24.738506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.738517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738564 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.738576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.738598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738633 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.738657 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.738669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.738710 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.738721 | orchestrator | 2025-05-28 19:23:24.738732 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-28 19:23:24.738743 | orchestrator | Wednesday 28 May 2025 19:17:33 +0000 (0:00:01.130) 0:02:01.138 ********* 2025-05-28 19:23:24.738755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 19:23:24.738766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 19:23:24.738778 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.738789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 19:23:24.738800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 19:23:24.738817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 19:23:24.738829 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.738840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-28 19:23:24.738858 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.738869 | orchestrator | 2025-05-28 19:23:24.738880 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-28 19:23:24.738891 | orchestrator | Wednesday 28 May 2025 19:17:35 +0000 (0:00:01.338) 0:02:02.476 ********* 2025-05-28 19:23:24.738902 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.738913 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.738924 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.738934 | orchestrator | 2025-05-28 19:23:24.738945 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-28 19:23:24.738956 | orchestrator | Wednesday 28 May 
2025 19:17:36 +0000 (0:00:01.575) 0:02:04.052 ********* 2025-05-28 19:23:24.738967 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.738978 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.738989 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.739000 | orchestrator | 2025-05-28 19:23:24.739011 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-28 19:23:24.739021 | orchestrator | Wednesday 28 May 2025 19:17:39 +0000 (0:00:02.537) 0:02:06.589 ********* 2025-05-28 19:23:24.739032 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.739043 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.739054 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.739064 | orchestrator | 2025-05-28 19:23:24.739075 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-28 19:23:24.739110 | orchestrator | Wednesday 28 May 2025 19:17:39 +0000 (0:00:00.351) 0:02:06.940 ********* 2025-05-28 19:23:24.739122 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.739133 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.739144 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.739154 | orchestrator | 2025-05-28 19:23:24.739165 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-28 19:23:24.739176 | orchestrator | Wednesday 28 May 2025 19:17:39 +0000 (0:00:00.440) 0:02:07.381 ********* 2025-05-28 19:23:24.739187 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.739198 | orchestrator | 2025-05-28 19:23:24.739209 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-28 19:23:24.739220 | orchestrator | Wednesday 28 May 2025 19:17:41 +0000 (0:00:01.241) 0:02:08.623 ********* 2025-05-28 19:23:24.739232 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:23:24.739333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:23:24.739364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:23:24.739473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:23:24.739505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:23:24.739591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:23:24.739608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 
'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739673 | orchestrator | 2025-05-28 19:23:24.739684 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-28 19:23:24.739696 | orchestrator | Wednesday 28 May 2025 19:17:46 +0000 (0:00:05.700) 0:02:14.324 ********* 2025-05-28 19:23:24.739715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:23:24.739732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:23:24.739744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739809 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.739832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:23:24.739846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:23:24.739857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 
19:23:24.739869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739940 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.739952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:23:24.739964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:23:24.739976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.739994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.740006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.740025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.740042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.740054 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.740065 | orchestrator | 2025-05-28 19:23:24.740077 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-28 19:23:24.740088 | orchestrator | Wednesday 28 May 2025 19:17:47 +0000 (0:00:01.017) 0:02:15.341 ********* 2025-05-28 19:23:24.740099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-28 
19:23:24.740111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-28 19:23:24.740122 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.740133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-28 19:23:24.740145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-28 19:23:24.740162 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.740173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-28 19:23:24.740185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-28 19:23:24.740196 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.740207 | orchestrator | 2025-05-28 19:23:24.740218 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-28 19:23:24.740229 | orchestrator | Wednesday 28 May 2025 19:17:49 +0000 (0:00:01.498) 0:02:16.839 ********* 2025-05-28 19:23:24.740240 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.740251 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.740294 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.740315 | orchestrator | 2025-05-28 19:23:24.740333 | orchestrator | TASK 
[proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-28 19:23:24.740351 | orchestrator | Wednesday 28 May 2025 19:17:50 +0000 (0:00:01.200) 0:02:18.040 ********* 2025-05-28 19:23:24.740362 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.740373 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.740384 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.740395 | orchestrator | 2025-05-28 19:23:24.740406 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-28 19:23:24.740416 | orchestrator | Wednesday 28 May 2025 19:17:52 +0000 (0:00:02.363) 0:02:20.404 ********* 2025-05-28 19:23:24.740427 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.740438 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.740449 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.740460 | orchestrator | 2025-05-28 19:23:24.740470 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-28 19:23:24.740481 | orchestrator | Wednesday 28 May 2025 19:17:53 +0000 (0:00:00.485) 0:02:20.889 ********* 2025-05-28 19:23:24.740492 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.740503 | orchestrator | 2025-05-28 19:23:24.740513 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-28 19:23:24.740524 | orchestrator | Wednesday 28 May 2025 19:17:54 +0000 (0:00:01.145) 0:02:22.035 ********* 2025-05-28 19:23:24.740552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': 
True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 19:23:24.740575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 
'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:23:24.740600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 19:23:24.740615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:23:24.740641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 19:23:24.740659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:23:24.740678 | orchestrator | 2025-05-28 19:23:24.740690 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-28 19:23:24.740701 | orchestrator | Wednesday 28 May 2025 19:18:00 +0000 (0:00:05.524) 0:02:27.560 ********* 2025-05-28 19:23:24.740719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-28 19:23:24.740737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:23:24.740756 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.740769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-28 19:23:24.740794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:23:24.740813 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.740826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-28 19:23:24.740856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:23:24.740875 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.740887 | orchestrator | 2025-05-28 19:23:24.740898 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-28 19:23:24.740909 | orchestrator | Wednesday 28 May 2025 19:18:05 +0000 (0:00:05.553) 0:02:33.113 ********* 2025-05-28 19:23:24.740921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 19:23:24.740933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 19:23:24.740944 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.740957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 19:23:24.740968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 19:23:24.740980 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.740991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-28 19:23:24.741009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}})  2025-05-28 19:23:24.741031 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.741043 | orchestrator | 2025-05-28 19:23:24.741054 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-28 19:23:24.741065 | orchestrator | Wednesday 28 May 2025 19:18:12 +0000 (0:00:06.524) 0:02:39.638 ********* 2025-05-28 19:23:24.741076 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.741087 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.741098 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.741109 | orchestrator | 2025-05-28 19:23:24.741120 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-28 19:23:24.741131 | orchestrator | Wednesday 28 May 2025 19:18:13 +0000 (0:00:01.381) 0:02:41.020 ********* 2025-05-28 19:23:24.741142 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.741152 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.741164 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.741175 | orchestrator | 2025-05-28 19:23:24.741185 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-28 19:23:24.741196 | orchestrator | Wednesday 28 May 2025 19:18:15 +0000 (0:00:02.393) 0:02:43.414 ********* 2025-05-28 19:23:24.741207 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.741218 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.741229 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.741240 | orchestrator | 2025-05-28 19:23:24.741251 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-28 19:23:24.741320 | orchestrator | Wednesday 28 May 2025 19:18:16 +0000 (0:00:00.514) 0:02:43.928 ********* 2025-05-28 19:23:24.741333 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 
2025-05-28 19:23:24.741344 | orchestrator | 2025-05-28 19:23:24.741355 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-28 19:23:24.741366 | orchestrator | Wednesday 28 May 2025 19:18:17 +0000 (0:00:01.287) 0:02:45.216 ********* 2025-05-28 19:23:24.741379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:23:24.741392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:23:24.741404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:23:24.741422 | orchestrator | 2025-05-28 19:23:24.741434 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-28 19:23:24.741444 | orchestrator | Wednesday 28 May 2025 19:18:22 +0000 (0:00:05.098) 0:02:50.315 ********* 2025-05-28 19:23:24.741468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 19:23:24.741480 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.741492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 19:23:24.741504 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.741515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-28 19:23:24.741527 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.741538 | orchestrator | 2025-05-28 19:23:24.741549 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-28 19:23:24.741560 | orchestrator | Wednesday 28 May 2025 19:18:23 +0000 (0:00:00.673) 0:02:50.988 ********* 2025-05-28 19:23:24.741571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-28 19:23:24.741582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-28 19:23:24.741594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-28 19:23:24.741605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-28 19:23:24.741616 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.741628 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.741645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-28 19:23:24.741656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-28 19:23:24.741667 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.741678 | orchestrator | 2025-05-28 19:23:24.741689 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-28 19:23:24.741700 | orchestrator | Wednesday 28 May 2025 19:18:24 +0000 (0:00:01.064) 0:02:52.052 ********* 2025-05-28 19:23:24.741711 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.741722 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.741733 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.741744 | orchestrator | 2025-05-28 19:23:24.741756 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-28 19:23:24.741767 | orchestrator | Wednesday 28 May 2025 19:18:25 +0000 (0:00:01.390) 0:02:53.443 ********* 2025-05-28 19:23:24.741777 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.741787 | orchestrator | changed: [testbed-node-1] 2025-05-28 
19:23:24.741797 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.741807 | orchestrator | 2025-05-28 19:23:24.741822 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-28 19:23:24.741833 | orchestrator | Wednesday 28 May 2025 19:18:28 +0000 (0:00:02.445) 0:02:55.889 ********* 2025-05-28 19:23:24.741842 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.741852 | orchestrator | 2025-05-28 19:23:24.741862 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-28 19:23:24.741872 | orchestrator | Wednesday 28 May 2025 19:18:29 +0000 (0:00:01.336) 0:02:57.225 ********* 2025-05-28 19:23:24.741887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.741898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.741909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.741926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.741942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.741957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 
'no'}}}}) 2025-05-28 19:23:24.741968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.741979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.741994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.742004 | orchestrator | 2025-05-28 19:23:24.742041 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-28 19:23:24.742052 | orchestrator | Wednesday 28 May 2025 19:18:37 +0000 (0:00:07.568) 0:03:04.794 ********* 2025-05-28 19:23:24.742070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.742085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 
'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.742096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.742106 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.742117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.742133 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.742149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.742160 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.742175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.742186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.742203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 
5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.742213 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.742224 | orchestrator | 2025-05-28 19:23:24.742234 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-28 19:23:24.742300 | orchestrator | Wednesday 28 May 2025 19:18:38 +0000 (0:00:00.886) 0:03:05.680 ********* 2025-05-28 19:23:24.742313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-28 19:23:24.742324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-28 19:23:24.742334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-28 19:23:24.742345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-28 19:23:24.742355 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.742365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-28 19:23:24.742375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-28 19:23:24.742392 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-05-28 19:23:24.742403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-05-28 19:23:24.742413 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.742432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-05-28 19:23:24.742444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-05-28 19:23:24.742498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-05-28 19:23:24.742507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-05-28 19:23:24.742522 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.742531 | orchestrator |
2025-05-28 19:23:24.742539 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] ***************
2025-05-28 19:23:24.742547 | orchestrator | Wednesday 28 May 2025 19:18:39 +0000 (0:00:01.505) 0:03:07.186 *********
2025-05-28 19:23:24.742555 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.742563 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.742571 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.742579 | orchestrator |
2025-05-28 19:23:24.742587 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] ***************
2025-05-28 19:23:24.742595 | orchestrator | Wednesday 28 May 2025 19:18:41 +0000 (0:00:01.407) 0:03:08.594 *********
2025-05-28 19:23:24.742603 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.742611 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.742619 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.742627 | orchestrator |
2025-05-28 19:23:24.742635 | orchestrator | TASK [include_role : horizon] **************************************************
2025-05-28 19:23:24.742643 | orchestrator | Wednesday 28 May 2025 19:18:43 +0000 (0:00:02.314) 0:03:10.908 *********
2025-05-28 19:23:24.742651 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.742659 | orchestrator |
2025-05-28 19:23:24.742667 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-05-28 19:23:24.742675 | orchestrator | Wednesday 28 May 2025 19:18:44 +0000 (0:00:01.179) 0:03:12.087 *********
2025-05-28 19:23:24.742691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-28 19:23:24.742717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-28 19:23:24.742745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-28 19:23:24.742759 | orchestrator |
2025-05-28 19:23:24.742768 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-05-28 19:23:24.742776 | orchestrator | Wednesday 28 May 2025 19:18:48 +0000 (0:00:04.074) 0:03:16.162 *********
2025-05-28 19:23:24.742785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-28 19:23:24.742794 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.742813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-28 19:23:24.742827 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.742836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-28 19:23:24.742846 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.742854 | orchestrator |
2025-05-28 19:23:24.742862 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-05-28 19:23:24.742870 | orchestrator | Wednesday 28 May 2025 19:18:49 +0000 (0:00:01.072) 0:03:17.234 *********
2025-05-28 19:23:24.742879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-28 19:23:24.742892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-28 19:23:24.742906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-28 19:23:24.742921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-28 19:23:24.742929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-05-28 19:23:24.742938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-28 19:23:24.742946 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.742955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-28 19:23:24.742964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-28 19:23:24.742972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-28 19:23:24.742980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-05-28 19:23:24.742989 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.742997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-28 19:23:24.743005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-28 19:23:24.743013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-28 19:23:24.743022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-28 19:23:24.743035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-05-28 19:23:24.743043 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.743051 | orchestrator |
2025-05-28 19:23:24.743077 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-05-28 19:23:24.743087 | orchestrator | Wednesday 28 May 2025 19:18:51 +0000 (0:00:01.306) 0:03:18.540 *********
2025-05-28 19:23:24.743095 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.743103 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.743111 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.743119 | orchestrator |
2025-05-28 19:23:24.743127 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-05-28 19:23:24.743136 | orchestrator | Wednesday 28 May 2025 19:18:52 +0000 (0:00:01.539) 0:03:20.080 *********
2025-05-28 19:23:24.743148 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.743205 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.743215 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.743223 | orchestrator |
2025-05-28 19:23:24.743231 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-05-28 19:23:24.743239 | orchestrator | Wednesday 28 May 2025 19:18:54 +0000 (0:00:02.342) 0:03:22.423 *********
2025-05-28 19:23:24.743247 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.743269 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.743278 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.743286 | orchestrator |
2025-05-28 19:23:24.743294 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-05-28 19:23:24.743301 | orchestrator | Wednesday 28 May 2025 19:18:55 +0000 (0:00:00.539) 0:03:22.962 *********
2025-05-28 19:23:24.743309 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.743317 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.743325 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.743333 | orchestrator |
2025-05-28 19:23:24.743341 | orchestrator | TASK [include_role : keystone] *************************************************
2025-05-28 19:23:24.743349 | orchestrator | Wednesday 28 May 2025 19:18:55 +0000 (0:00:00.310) 0:03:23.273 *********
2025-05-28 19:23:24.743357 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.743365 | orchestrator |
2025-05-28 19:23:24.743373 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-05-28 19:23:24.743380 | orchestrator | Wednesday 28 May 2025 19:18:57 +0000 (0:00:01.386) 0:03:24.659 *********
2025-05-28 19:23:24.743389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:23:24.743399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:23:24.743414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:23:24.743443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:23:24.743453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:23:24.743461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:23:24.743471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:23:24.743486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:23:24.743499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:23:24.743508 | orchestrator |
2025-05-28 19:23:24.743516 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-05-28 19:23:24.743524 | orchestrator | Wednesday 28 May 2025 19:19:02 +0000 (0:00:05.071) 0:03:29.730 *********
2025-05-28 19:23:24.743540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:23:24.743549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:23:24.743557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:23:24.743571 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.743580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:23:24.743593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:23:24.743606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:23:24.743615 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.743623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:23:24.743632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:23:24.743647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:23:24.743656 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.743664 | orchestrator |
2025-05-28 19:23:24.743672 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-05-28 19:23:24.743680 | orchestrator | Wednesday 28 May 2025 19:19:03 +0000 (0:00:00.823) 0:03:30.554 *********
2025-05-28 19:23:24.743689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-05-28 19:23:24.743698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-05-28 19:23:24.743706 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.743719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-05-28 19:23:24.743728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})
2025-05-28 19:23:24.743739 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.743748 | orchestrator | skipping: [testbed-node-1]
=> (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-28 19:23:24.743757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-28 19:23:24.743765 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.743773 | orchestrator | 2025-05-28 19:23:24.743781 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-28 19:23:24.743789 | orchestrator | Wednesday 28 May 2025 19:19:04 +0000 (0:00:01.278) 0:03:31.832 ********* 2025-05-28 19:23:24.743797 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.743805 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.743813 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.743822 | orchestrator | 2025-05-28 19:23:24.743830 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-28 19:23:24.743838 | orchestrator | Wednesday 28 May 2025 19:19:05 +0000 (0:00:01.337) 0:03:33.170 ********* 2025-05-28 19:23:24.743846 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.743854 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.743868 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.743876 | orchestrator | 2025-05-28 19:23:24.743884 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-28 19:23:24.743892 | orchestrator | Wednesday 28 May 2025 19:19:08 +0000 (0:00:02.499) 0:03:35.669 ********* 2025-05-28 19:23:24.743900 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.743908 | orchestrator | 
skipping: [testbed-node-1] 2025-05-28 19:23:24.743916 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.743924 | orchestrator | 2025-05-28 19:23:24.743932 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-28 19:23:24.743940 | orchestrator | Wednesday 28 May 2025 19:19:08 +0000 (0:00:00.301) 0:03:35.971 ********* 2025-05-28 19:23:24.743948 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.743956 | orchestrator | 2025-05-28 19:23:24.743964 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-28 19:23:24.743972 | orchestrator | Wednesday 28 May 2025 19:19:09 +0000 (0:00:01.313) 0:03:37.284 ********* 2025-05-28 19:23:24.743981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:23:24.743989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:23:24.744017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:23:24.744040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744048 | orchestrator | 2025-05-28 19:23:24.744056 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-28 19:23:24.744065 | orchestrator | Wednesday 28 May 2025 19:19:14 +0000 (0:00:04.433) 0:03:41.717 ********* 2025-05-28 19:23:24.744077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:23:24.744090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744104 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.744112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:23:24.744121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744129 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.744138 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:23:24.744151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744159 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.744168 | orchestrator | 2025-05-28 19:23:24.744176 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-28 19:23:24.744184 | orchestrator | Wednesday 28 May 2025 19:19:15 
+0000 (0:00:00.821) 0:03:42.539 ********* 2025-05-28 19:23:24.744196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-28 19:23:24.744210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-28 19:23:24.744218 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.744227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-28 19:23:24.744235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-28 19:23:24.744243 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.744251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-28 19:23:24.744276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-28 19:23:24.744284 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.744292 | orchestrator | 2025-05-28 19:23:24.744301 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-28 19:23:24.744309 | orchestrator | Wednesday 28 May 2025 19:19:16 +0000 (0:00:01.525) 0:03:44.065 ********* 2025-05-28 19:23:24.744316 | orchestrator | changed: 
[testbed-node-0] 2025-05-28 19:23:24.744324 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.744332 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.744341 | orchestrator | 2025-05-28 19:23:24.744349 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-28 19:23:24.744357 | orchestrator | Wednesday 28 May 2025 19:19:18 +0000 (0:00:01.396) 0:03:45.461 ********* 2025-05-28 19:23:24.744365 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.744373 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.744381 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.744389 | orchestrator | 2025-05-28 19:23:24.744397 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-28 19:23:24.744405 | orchestrator | Wednesday 28 May 2025 19:19:20 +0000 (0:00:02.425) 0:03:47.887 ********* 2025-05-28 19:23:24.744413 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.744420 | orchestrator | 2025-05-28 19:23:24.744428 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-28 19:23:24.744436 | orchestrator | Wednesday 28 May 2025 19:19:21 +0000 (0:00:01.181) 0:03:49.068 ********* 2025-05-28 19:23:24.744445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-28 19:23:24.744453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-28 19:23:24.744502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-28 
19:23:24.744553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744578 
| orchestrator | 2025-05-28 19:23:24.744586 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-28 19:23:24.744594 | orchestrator | Wednesday 28 May 2025 19:19:26 +0000 (0:00:04.537) 0:03:53.606 ********* 2025-05-28 19:23:24.744603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-28 19:23:24.744620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744649 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.744658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-28 19:23:24.744667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744702 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.744714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-28 19:23:24.744723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.744755 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.744763 | orchestrator | 2025-05-28 19:23:24.744771 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-28 19:23:24.744780 | orchestrator | Wednesday 28 May 2025 19:19:27 +0000 (0:00:00.998) 0:03:54.604 ********* 2025-05-28 19:23:24.744788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-28 19:23:24.744796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-28 19:23:24.744804 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.744812 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-28 19:23:24.744821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-28 19:23:24.744829 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.744841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-28 19:23:24.744851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-28 19:23:24.744859 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.744867 | orchestrator | 2025-05-28 19:23:24.744879 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-28 19:23:24.744887 | orchestrator | Wednesday 28 May 2025 19:19:28 +0000 (0:00:01.389) 0:03:55.994 ********* 2025-05-28 19:23:24.744895 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.744903 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.744911 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.744919 | orchestrator | 2025-05-28 19:23:24.744927 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-28 19:23:24.744935 | orchestrator | Wednesday 28 May 2025 19:19:30 +0000 (0:00:01.641) 0:03:57.636 ********* 2025-05-28 19:23:24.744943 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.744951 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.744959 | 
orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.744967 | orchestrator | 2025-05-28 19:23:24.744975 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-28 19:23:24.744983 | orchestrator | Wednesday 28 May 2025 19:19:32 +0000 (0:00:02.576) 0:04:00.212 ********* 2025-05-28 19:23:24.744991 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.744999 | orchestrator | 2025-05-28 19:23:24.745007 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-28 19:23:24.745015 | orchestrator | Wednesday 28 May 2025 19:19:34 +0000 (0:00:01.454) 0:04:01.666 ********* 2025-05-28 19:23:24.745023 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:23:24.745031 | orchestrator | 2025-05-28 19:23:24.745039 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-28 19:23:24.745047 | orchestrator | Wednesday 28 May 2025 19:19:37 +0000 (0:00:03.110) 0:04:04.777 ********* 2025-05-28 19:23:24.745056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 19:23:24.745070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 19:23:24.745079 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.745097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 19:23:24.745112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 19:23:24.745127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 19:23:24.745140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 19:23:24.745149 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.745157 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.745165 | orchestrator | 2025-05-28 19:23:24.745173 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-28 19:23:24.745181 | orchestrator | Wednesday 28 May 2025 19:19:40 +0000 (0:00:03.070) 0:04:07.848 ********* 2025-05-28 19:23:24.745190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 19:23:24.745206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 19:23:24.745215 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.745233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 19:23:24.745243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 19:23:24.745297 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.745308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-28 19:23:24.745323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-28 19:23:24.745332 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.745340 | orchestrator | 2025-05-28 19:23:24.745352 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-28 19:23:24.745360 | orchestrator | Wednesday 28 May 2025 19:19:43 +0000 (0:00:02.971) 0:04:10.820 ********* 2025-05-28 19:23:24.745369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 19:23:24.745377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 19:23:24.745393 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.745402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 19:23:24.745410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 
rise 2 fall 5 backup', '']}})  2025-05-28 19:23:24.745418 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.745427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 19:23:24.745577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-28 19:23:24.745592 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.745600 | orchestrator | 2025-05-28 19:23:24.745608 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-28 19:23:24.745616 | orchestrator | Wednesday 28 May 2025 19:19:46 +0000 (0:00:03.633) 0:04:14.453 ********* 2025-05-28 19:23:24.745624 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.745632 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.745645 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.745654 | orchestrator | 2025-05-28 
19:23:24.745661 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-05-28 19:23:24.745669 | orchestrator | Wednesday 28 May 2025 19:19:49 +0000 (0:00:02.294) 0:04:16.748 *********
2025-05-28 19:23:24.745677 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.745691 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.745700 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.745708 | orchestrator |
2025-05-28 19:23:24.745716 | orchestrator | TASK [include_role : masakari] *************************************************
2025-05-28 19:23:24.745723 | orchestrator | Wednesday 28 May 2025 19:19:51 +0000 (0:00:01.820) 0:04:18.569 *********
2025-05-28 19:23:24.745731 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.745739 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.745747 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.745756 | orchestrator |
2025-05-28 19:23:24.745763 | orchestrator | TASK [include_role : memcached] ************************************************
2025-05-28 19:23:24.745771 | orchestrator | Wednesday 28 May 2025 19:19:51 +0000 (0:00:00.581) 0:04:19.150 *********
2025-05-28 19:23:24.745779 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.745787 | orchestrator |
2025-05-28 19:23:24.745795 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-05-28 19:23:24.745803 | orchestrator | Wednesday 28 May 2025 19:19:53 +0000 (0:00:01.570) 0:04:20.721 *********
2025-05-28 19:23:24.745812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 19:23:24.745821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 19:23:24.745828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 19:23:24.745835 | orchestrator |
2025-05-28 19:23:24.745842 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-05-28 19:23:24.745848 | orchestrator | Wednesday 28 May 2025 19:19:55 +0000 (0:00:02.042) 0:04:22.764 *********
2025-05-28 19:23:24.745863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 19:23:24.745876 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.745883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 19:23:24.745890 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.745897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-05-28 19:23:24.745904 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.745911 | orchestrator |
2025-05-28 19:23:24.745918 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-05-28 19:23:24.745933 | orchestrator | Wednesday 28 May 2025 19:19:55 +0000 (0:00:00.418) 0:04:23.182 *********
2025-05-28 19:23:24.745941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-28 19:23:24.745948 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.745955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-28 19:23:24.745962 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.745969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-28 19:23:24.745976 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.745983 | orchestrator |
2025-05-28 19:23:24.745990 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-05-28 19:23:24.745996 | orchestrator | Wednesday 28 May 2025 19:19:56 +0000 (0:00:01.084) 0:04:24.267 *********
2025-05-28 19:23:24.746003 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.746010 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.746110 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.746118 | orchestrator |
2025-05-28 19:23:24.746126 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-05-28 19:23:24.746132 | orchestrator | Wednesday 28 May 2025 19:19:57 +0000 (0:00:00.970) 0:04:25.237 *********
2025-05-28 19:23:24.746139 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.746146 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.746153 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.746160 | orchestrator |
2025-05-28 19:23:24.746166 | orchestrator | TASK [include_role : mistral] **************************************************
2025-05-28 19:23:24.746173 | orchestrator | Wednesday 28 May 2025 19:19:59 +0000 (0:00:01.435) 0:04:26.672 *********
2025-05-28 19:23:24.746180 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.746193 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.746201 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.746208 | orchestrator |
2025-05-28 19:23:24.746215 | orchestrator | TASK [include_role : neutron] **************************************************
2025-05-28 19:23:24.746222 | orchestrator | Wednesday 28 May 2025 19:19:59 +0000 (0:00:00.608) 0:04:27.282 *********
2025-05-28 19:23:24.746228 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.746235 | orchestrator |
2025-05-28 19:23:24.746242 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-05-28 19:23:24.746253 | orchestrator | Wednesday 28 May 2025 19:20:01 +0000 (0:00:01.733) 0:04:29.016 *********
2025-05-28 19:23:24.746276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-28 19:23:24.746283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-28 19:23:24.746323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.746351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.746359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:23:24.746377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.746400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.746407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-28 19:23:24.746427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:23:24.746434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-28 19:23:24.746456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-28 19:23:24.746497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-28 19:23:24.746504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.746518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.746532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:23:24.746575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-28 19:23:24.746594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.746607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.746622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.746629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.746648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-28 19:23:24.746659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.746669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:23:24.746677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:23:24.746684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.746695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.746704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.746720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:23:24.746737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.746748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:23:24.746767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:23:24.746779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.746791 | orchestrator | 2025-05-28 19:23:24.746803 | orchestrator | 
TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-28 19:23:24.746815 | orchestrator | Wednesday 28 May 2025 19:20:06 +0000 (0:00:05.327) 0:04:34.343 ********* 2025-05-28 19:23:24.746832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:23:24.746853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 
19:23:24.746865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:23:24.746884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.746901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.746915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.746939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:23:24.746953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.746973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.746987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.747005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:23:24.747026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:23:24.747046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:23:24.747062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.747082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:23:24.747095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.747107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.747118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-05-28 19:23:24.747145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.747158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.747176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.747186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:23:24.747193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-28 19:23:24.747204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.747215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.747227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.747234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:23:24.747241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.747248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.747272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.747284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.747294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.747307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.747315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.747322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:23:24.747329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:23:24.747336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:23:24.747351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-28 19:23:24.747364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.747371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:23:24.747378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.747385 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:23:24.747524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.747545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.747552 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.747560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:23:24.747567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:23:24.747574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.747581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.747589 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.747641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:23:24.747661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:23:24.747668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.747675 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.747682 | orchestrator | 2025-05-28 19:23:24.747689 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-28 19:23:24.747697 | orchestrator | Wednesday 28 May 2025 19:20:08 +0000 (0:00:01.974) 0:04:36.318 ********* 
2025-05-28 19:23:24.747704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-28 19:23:24.747712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-28 19:23:24.747719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-28 19:23:24.747726 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.747732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-28 19:23:24.747739 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.747746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-28 19:23:24.747753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-28 19:23:24.747760 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.747766 | orchestrator |
2025-05-28 19:23:24.747773 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-05-28 19:23:24.747780 | orchestrator | Wednesday 28 May 2025 19:20:11 +0000 (0:00:02.230) 0:04:38.548 *********
2025-05-28 19:23:24.747786 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.747797 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.747804 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.747811 | orchestrator |
2025-05-28 19:23:24.747817 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-05-28 19:23:24.747824 | orchestrator | Wednesday 28 May 2025 19:20:12 +0000 (0:00:01.519) 0:04:40.068 *********
2025-05-28 19:23:24.747830 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.747837 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.747844 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.747850 | orchestrator |
2025-05-28 19:23:24.747857 | orchestrator | TASK [include_role : placement] ************************************************
2025-05-28 19:23:24.747863 | orchestrator | Wednesday 28 May 2025 19:20:15 +0000 (0:00:02.716) 0:04:42.785 *********
2025-05-28 19:23:24.747870 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.747877 | orchestrator |
2025-05-28 19:23:24.747903 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-05-28 19:23:24.747911 | orchestrator | Wednesday 28 May 2025 19:20:17 +0000 (0:00:01.676) 0:04:44.462 *********
2025-05-28 19:23:24.747922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http',
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.747930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.747938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.747945 | orchestrator | 2025-05-28 19:23:24.747956 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-28 19:23:24.747963 | orchestrator | Wednesday 28 May 2025 19:20:20 +0000 (0:00:03.851) 0:04:48.313 ********* 2025-05-28 19:23:24.747970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.747977 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.748009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.748018 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.748025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.748032 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.748039 | orchestrator | 2025-05-28 19:23:24.748046 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-28 19:23:24.748053 | orchestrator | Wednesday 28 May 2025 19:20:21 +0000 (0:00:00.777) 0:04:49.090 ********* 2025-05-28 19:23:24.748060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-28 19:23:24.748067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-28 19:23:24.748074 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.748085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-28 19:23:24.748092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-28 19:23:24.748099 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.748106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-28 19:23:24.748113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-28 19:23:24.748120 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.748127 | orchestrator |
2025-05-28 19:23:24.748133 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-05-28 19:23:24.748140 | orchestrator | Wednesday 28 May 2025 19:20:22 +0000 (0:00:01.227) 0:04:50.318 *********
2025-05-28 19:23:24.748147 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.748154 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.748160 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.748167 | orchestrator |
2025-05-28 19:23:24.748174 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-05-28 19:23:24.748181 | orchestrator | Wednesday 28 May 2025 19:20:24 +0000 (0:00:01.516) 0:04:51.834 *********
2025-05-28 19:23:24.748187 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.748194 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.748201 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.748208 | orchestrator |
2025-05-28 19:23:24.748233 | orchestrator | TASK [include_role : nova] *****************************************************
2025-05-28 19:23:24.748241 | orchestrator | Wednesday 28 May 2025 19:20:26 +0000 (0:00:02.528) 0:04:54.363 *********
2025-05-28 19:23:24.748248 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.748297 | orchestrator |
2025-05-28 19:23:24.748307 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-05-28 19:23:24.748315 | orchestrator | Wednesday 28 May 2025 19:20:28 +0000 (0:00:01.670) 0:04:56.033 *********
2025-05-28 19:23:24.748327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port':
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.748337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.748361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.748423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748438 | orchestrator | 2025-05-28 19:23:24.748445 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-28 19:23:24.748453 | orchestrator | Wednesday 28 May 2025 19:20:34 +0000 (0:00:05.722) 0:05:01.755 ********* 2025-05-28 19:23:24.748482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.748491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748510 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.748518 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.748526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 
'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748563 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.748571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.748583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.748598 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.748605 | orchestrator | 2025-05-28 19:23:24.748612 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-28 19:23:24.748619 | orchestrator | Wednesday 28 May 2025 19:20:35 +0000 (0:00:00.850) 0:05:02.605 ********* 2025-05-28 19:23:24.748626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}})  2025-05-28 19:23:24.748640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748671 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.748677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748711 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.748718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-28 19:23:24.748744 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.748750 | orchestrator | 2025-05-28 19:23:24.748757 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-28 19:23:24.748763 | orchestrator | Wednesday 28 May 2025 19:20:36 +0000 (0:00:01.425) 0:05:04.031 ********* 2025-05-28 19:23:24.748769 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.748775 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.748782 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.748788 | orchestrator | 2025-05-28 19:23:24.748794 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-28 19:23:24.748800 | orchestrator | Wednesday 28 May 2025 19:20:38 +0000 (0:00:01.443) 0:05:05.475 ********* 2025-05-28 19:23:24.748806 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.748813 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.748819 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.748825 | orchestrator | 2025-05-28 19:23:24.748831 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-28 19:23:24.748838 | orchestrator | Wednesday 28 May 2025 
19:20:40 +0000 (0:00:02.545) 0:05:08.020 ********* 2025-05-28 19:23:24.748844 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.748850 | orchestrator | 2025-05-28 19:23:24.748856 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-28 19:23:24.748863 | orchestrator | Wednesday 28 May 2025 19:20:42 +0000 (0:00:01.622) 0:05:09.642 ********* 2025-05-28 19:23:24.748869 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-28 19:23:24.748875 | orchestrator | 2025-05-28 19:23:24.748881 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-28 19:23:24.748887 | orchestrator | Wednesday 28 May 2025 19:20:43 +0000 (0:00:01.588) 0:05:11.230 ********* 2025-05-28 19:23:24.748894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-28 19:23:24.748917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}}}}) 2025-05-28 19:23:24.748933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-28 19:23:24.748940 | orchestrator | 2025-05-28 19:23:24.748946 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-28 19:23:24.748953 | orchestrator | Wednesday 28 May 2025 19:20:49 +0000 (0:00:05.554) 0:05:16.785 ********* 2025-05-28 19:23:24.748959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 19:23:24.748966 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.748972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2025-05-28 19:23:24.748979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 19:23:24.748986 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.748992 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.748999 | orchestrator | 2025-05-28 19:23:24.749005 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-28 19:23:24.749011 | orchestrator | Wednesday 28 May 2025 19:20:50 +0000 (0:00:01.546) 0:05:18.332 ********* 2025-05-28 19:23:24.749017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 19:23:24.749024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 19:23:24.749031 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.749037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 19:23:24.749047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 19:23:24.749058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 19:23:24.749065 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.749088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-28 19:23:24.749096 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.749102 | orchestrator | 2025-05-28 19:23:24.749109 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-28 19:23:24.749115 | orchestrator | Wednesday 28 May 2025 19:20:53 +0000 (0:00:02.338) 0:05:20.670 ********* 2025-05-28 19:23:24.749122 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.749128 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.749134 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.749141 | orchestrator | 2025-05-28 19:23:24.749152 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-28 19:23:24.749159 | orchestrator | Wednesday 28 May 2025 19:20:56 +0000 (0:00:03.126) 0:05:23.796 ********* 2025-05-28 19:23:24.749165 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.749172 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.749178 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.749184 | orchestrator | 2025-05-28 19:23:24.749190 | orchestrator | TASK [nova-cell 
: Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-28 19:23:24.749197 | orchestrator | Wednesday 28 May 2025 19:21:00 +0000 (0:00:03.722) 0:05:27.519 ********* 2025-05-28 19:23:24.749203 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-28 19:23:24.749210 | orchestrator | 2025-05-28 19:23:24.749216 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-28 19:23:24.749223 | orchestrator | Wednesday 28 May 2025 19:21:01 +0000 (0:00:01.463) 0:05:28.983 ********* 2025-05-28 19:23:24.749229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 19:23:24.749236 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.749243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 19:23:24.749249 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.749270 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 19:23:24.749284 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.749291 | orchestrator | 2025-05-28 19:23:24.749297 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-28 19:23:24.749304 | orchestrator | Wednesday 28 May 2025 19:21:03 +0000 (0:00:01.770) 0:05:30.753 ********* 2025-05-28 19:23:24.749310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 19:23:24.749317 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.749342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 19:23:24.749349 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.749356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-28 19:23:24.749363 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.749369 | orchestrator | 2025-05-28 19:23:24.749375 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-28 19:23:24.749382 | orchestrator | Wednesday 28 May 2025 19:21:05 +0000 (0:00:01.936) 0:05:32.690 ********* 2025-05-28 19:23:24.749388 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.749395 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.749401 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.749408 | orchestrator | 2025-05-28 19:23:24.749414 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-28 19:23:24.749420 | orchestrator | Wednesday 28 May 2025 19:21:07 +0000 (0:00:02.114) 0:05:34.804 ********* 2025-05-28 19:23:24.749426 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:23:24.749433 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:23:24.749439 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:23:24.749445 | orchestrator | 2025-05-28 19:23:24.749452 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-28 19:23:24.749458 
| orchestrator | Wednesday 28 May 2025 19:21:10 +0000 (0:00:03.210) 0:05:38.014 ********* 2025-05-28 19:23:24.749464 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:23:24.749471 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:23:24.749477 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:23:24.749483 | orchestrator | 2025-05-28 19:23:24.749489 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-28 19:23:24.749495 | orchestrator | Wednesday 28 May 2025 19:21:14 +0000 (0:00:04.388) 0:05:42.403 ********* 2025-05-28 19:23:24.749502 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item=nova-serialproxy) 2025-05-28 19:23:24.749512 | orchestrator | 2025-05-28 19:23:24.749518 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-28 19:23:24.749524 | orchestrator | Wednesday 28 May 2025 19:21:16 +0000 (0:00:01.453) 0:05:43.856 ********* 2025-05-28 19:23:24.749531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 19:23:24.749537 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.749544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 19:23:24.749550 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.749557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 19:23:24.749563 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.749570 | orchestrator | 2025-05-28 19:23:24.749576 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-28 19:23:24.749582 | orchestrator | Wednesday 28 May 2025 19:21:18 +0000 (0:00:01.660) 0:05:45.517 ********* 2025-05-28 19:23:24.749659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 19:23:24.749675 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.749685 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 19:23:24.749692 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.749699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-28 19:23:24.749713 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.749719 | orchestrator | 2025-05-28 19:23:24.749726 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-28 19:23:24.749732 | orchestrator | Wednesday 28 May 2025 19:21:19 +0000 (0:00:01.903) 0:05:47.420 ********* 2025-05-28 19:23:24.749739 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.749745 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.749751 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.749758 | orchestrator | 2025-05-28 19:23:24.749764 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-28 19:23:24.749770 | orchestrator | Wednesday 28 May 2025 19:21:22 +0000 (0:00:02.047) 0:05:49.467 
********* 2025-05-28 19:23:24.749776 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:23:24.749783 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:23:24.749789 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:23:24.749795 | orchestrator | 2025-05-28 19:23:24.749802 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-28 19:23:24.749808 | orchestrator | Wednesday 28 May 2025 19:21:24 +0000 (0:00:02.968) 0:05:52.436 ********* 2025-05-28 19:23:24.749814 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:23:24.749821 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:23:24.749827 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:23:24.749833 | orchestrator | 2025-05-28 19:23:24.749840 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-28 19:23:24.749846 | orchestrator | Wednesday 28 May 2025 19:21:29 +0000 (0:00:04.032) 0:05:56.468 ********* 2025-05-28 19:23:24.749852 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.749859 | orchestrator | 2025-05-28 19:23:24.749865 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-28 19:23:24.749871 | orchestrator | Wednesday 28 May 2025 19:21:31 +0000 (0:00:02.031) 0:05:58.500 ********* 2025-05-28 19:23:24.749878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.749905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 19:23:24.749916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.749928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.749935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.749942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.749948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 19:23:24.749955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.749981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.749994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.750001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.750007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 19:23:24.750035 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.750042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.750070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.750083 | orchestrator | 2025-05-28 19:23:24.750093 | orchestrator | TASK 
[haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-28 19:23:24.750099 | orchestrator | Wednesday 28 May 2025 19:21:36 +0000 (0:00:05.423) 0:06:03.924 ********* 2025-05-28 19:23:24.750106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.750113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 19:23:24.750119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.750126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.750133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.750143 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.750171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.750179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 19:23:24.750185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2025-05-28 19:23:24.750192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.750199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.750205 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.750228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.750244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-28 19:23:24.750251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.750274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-28 19:23:24.750281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:23:24.750287 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.750294 | orchestrator | 2025-05-28 19:23:24.750300 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-28 19:23:24.750307 | orchestrator | Wednesday 28 May 2025 19:21:37 +0000 (0:00:00.995) 0:06:04.920 ********* 2025-05-28 19:23:24.750313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 19:23:24.750320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 19:23:24.750326 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.750333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 19:23:24.750339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 19:23:24.750350 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.750356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 19:23:24.750363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-28 19:23:24.750369 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.750375 | orchestrator | 2025-05-28 19:23:24.750399 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-28 19:23:24.750406 | orchestrator | Wednesday 28 May 2025 19:21:38 +0000 (0:00:01.414) 0:06:06.334 ********* 2025-05-28 19:23:24.750413 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.750419 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.750425 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:23:24.750432 | orchestrator | 2025-05-28 19:23:24.750438 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-28 19:23:24.750444 | orchestrator | Wednesday 28 May 2025 19:21:40 +0000 (0:00:01.607) 0:06:07.942 ********* 2025-05-28 19:23:24.750450 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:23:24.750462 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:23:24.750468 | orchestrator | changed: 
[testbed-node-2] 2025-05-28 19:23:24.750475 | orchestrator | 2025-05-28 19:23:24.750481 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-28 19:23:24.750487 | orchestrator | Wednesday 28 May 2025 19:21:42 +0000 (0:00:02.481) 0:06:10.423 ********* 2025-05-28 19:23:24.750493 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.750499 | orchestrator | 2025-05-28 19:23:24.750506 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-28 19:23:24.750512 | orchestrator | Wednesday 28 May 2025 19:21:44 +0000 (0:00:01.607) 0:06:12.031 ********* 2025-05-28 19:23:24.750519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:23:24.750526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:23:24.750537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:23:24.750561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:23:24.750573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:23:24.750582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-28 19:23:24.750593 | orchestrator |
2025-05-28 19:23:24.750600 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-05-28 19:23:24.750606 | orchestrator | Wednesday 28 May 2025 19:21:51 +0000 (0:00:07.154) 0:06:19.185 *********
2025-05-28 19:23:24.750613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-28 19:23:24.750639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-28 19:23:24.750648 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.750654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-28 19:23:24.750662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-28 19:23:24.750673 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.750679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-28 19:23:24.750703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-28 19:23:24.750714 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.750721 | orchestrator |
2025-05-28 19:23:24.750727 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-05-28 19:23:24.750734 | orchestrator | Wednesday 28 May 2025 19:21:52 +0000 (0:00:01.185) 0:06:20.371 *********
2025-05-28 19:23:24.750740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-28 19:23:24.750746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-28 19:23:24.750753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-28 19:23:24.750760 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.750766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-28 19:23:24.750772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-28 19:23:24.750779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-28 19:23:24.750789 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.750796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-28 19:23:24.750802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-28 19:23:24.750809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-28 19:23:24.750815 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.750821 | orchestrator |
2025-05-28 19:23:24.750828 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-05-28 19:23:24.750834 | orchestrator | Wednesday 28 May 2025 19:21:54 +0000 (0:00:01.444) 0:06:21.815 *********
2025-05-28 19:23:24.750840 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.750846 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.750853 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.750859 | orchestrator |
2025-05-28 19:23:24.750865 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-05-28 19:23:24.750871 | orchestrator | Wednesday 28 May 2025 19:21:55 +0000 (0:00:00.752) 0:06:22.567 *********
2025-05-28 19:23:24.750877 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.750884 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.750890 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.750896 | orchestrator |
2025-05-28 19:23:24.750902 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-05-28 19:23:24.750909 | orchestrator | Wednesday 28 May 2025 19:21:56 +0000 (0:00:01.861) 0:06:24.428 *********
2025-05-28 19:23:24.750915 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.750921 | orchestrator |
2025-05-28 19:23:24.750927 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-05-28 19:23:24.750933 | orchestrator | Wednesday 28 May 2025 19:21:58 +0000 (0:00:01.913) 0:06:26.342 *********
2025-05-28 19:23:24.750959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-28 19:23:24.750968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-28 19:23:24.750975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.750986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.750993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-28 19:23:24.751000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-28 19:23:24.751007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-28 19:23:24.751030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-28 19:23:24.751058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-28 19:23:24.751065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-28 19:23:24.751072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-28 19:23:24.751112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-28 19:23:24.751123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-28 19:23:24.751130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:23:24.751166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-28 19:23:24.751190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-28 19:23:24.751197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:23:24.751219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-28 19:23:24.751241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-28 19:23:24.751248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:23:24.751280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751287 | orchestrator |
2025-05-28 19:23:24.751298 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-05-28 19:23:24.751304 | orchestrator | Wednesday 28 May 2025 19:22:04 +0000 (0:00:05.451) 0:06:31.793 *********
2025-05-28 19:23:24.751318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-28 19:23:24.751325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-28 19:23:24.751332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:23:24.751346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-28 19:23:24.751353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:23:24.751371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:23:24.751379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:23:24.751399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:23:24.751405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 
'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751412 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.751425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:23:24.751435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 
19:23:24.751448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:23:24.751455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:23:24.751462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:23:24.751476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:23:24.751499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751506 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.751512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:23:24.751519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:23:24.751526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:23:24.751558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:23:24.751565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:23:24.751572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:23:24.751602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:23:24.751610 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.751616 | orchestrator | 2025-05-28 19:23:24.751623 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-28 19:23:24.751629 | orchestrator | Wednesday 28 May 2025 19:22:05 +0000 (0:00:01.384) 0:06:33.178 ********* 2025-05-28 19:23:24.751636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-28 19:23:24.751642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-28 19:23:24.751649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 19:23:24.751656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 19:23:24.751663 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.751669 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-28 19:23:24.751676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-28 19:23:24.751683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 19:23:24.751690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 19:23:24.751702 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.751708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-28 19:23:24.751714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-28 19:23:24.751721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2025-05-28 19:23:24.751727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-28 19:23:24.751734 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.751740 | orchestrator | 2025-05-28 19:23:24.751747 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-28 19:23:24.751753 | orchestrator | Wednesday 28 May 2025 19:22:07 +0000 (0:00:01.655) 0:06:34.834 ********* 2025-05-28 19:23:24.751763 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.751769 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.751776 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.751782 | orchestrator | 2025-05-28 19:23:24.751788 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-28 19:23:24.751794 | orchestrator | Wednesday 28 May 2025 19:22:08 +0000 (0:00:01.035) 0:06:35.870 ********* 2025-05-28 19:23:24.751801 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.751807 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.751813 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.751820 | orchestrator | 2025-05-28 19:23:24.751829 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-28 19:23:24.751835 | orchestrator | Wednesday 28 May 2025 19:22:10 +0000 (0:00:01.872) 0:06:37.742 ********* 2025-05-28 19:23:24.751841 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:23:24.751848 | orchestrator | 2025-05-28 19:23:24.751854 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] 
******************* 2025-05-28 19:23:24.751860 | orchestrator | Wednesday 28 May 2025 19:22:11 +0000 (0:00:01.631) 0:06:39.374 ********* 2025-05-28 19:23:24.751867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:23:24.751874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:23:24.751897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-28 19:23:24.751904 | orchestrator | 2025-05-28 19:23:24.751911 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-28 19:23:24.751917 | orchestrator | Wednesday 28 May 2025 19:22:15 +0000 (0:00:03.117) 0:06:42.491 ********* 2025-05-28 19:23:24.751930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-28 19:23:24.751938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-28 19:23:24.751949 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.751956 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.751962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-28 19:23:24.751969 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.751976 | orchestrator |
2025-05-28 19:23:24.751982 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-05-28 19:23:24.751988 | orchestrator | Wednesday 28 May 2025 19:22:15 +0000 (0:00:00.713) 0:06:43.205 *********
2025-05-28 19:23:24.751995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-28 19:23:24.752001 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.752007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-28 19:23:24.752014 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.752020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-05-28 19:23:24.752026 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.752033 | orchestrator |
2025-05-28 19:23:24.752043 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-05-28 19:23:24.752054 | orchestrator | Wednesday 28 May 2025 19:22:16 +0000 (0:00:00.836) 0:06:44.042 *********
2025-05-28 19:23:24.752065 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.752076 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.752086 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.752096 | orchestrator |
2025-05-28 19:23:24.752106 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-05-28 19:23:24.752116 | orchestrator | Wednesday 28 May 2025 19:22:17 +0000 (0:00:00.744) 0:06:44.787 *********
2025-05-28 19:23:24.752122 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.752129 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.752135 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.752141 | orchestrator |
2025-05-28 19:23:24.752147 | orchestrator | TASK [include_role : skyline] **************************************************
2025-05-28 19:23:24.752154 | orchestrator | Wednesday 28 May 2025 19:22:19 +0000 (0:00:01.832) 0:06:46.620 *********
2025-05-28 19:23:24.752160 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:23:24.752166 | orchestrator |
2025-05-28 19:23:24.752176 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-05-28 19:23:24.752182 | orchestrator | Wednesday 28 May 2025 19:22:21 +0000 (0:00:02.018) 0:06:48.638 *********
2025-05-28 19:23:24.752189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.752201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.752208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.752218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.752235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.752254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-28 19:23:24.752350 | orchestrator | 2025-05-28 19:23:24.752357 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-28 19:23:24.752363 | orchestrator | Wednesday 28 May 2025 19:22:29 +0000 (0:00:08.484) 0:06:57.122 ********* 2025-05-28 19:23:24.752370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.752382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.752389 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:23:24.752399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.752411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.752418 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:23:24.752425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.752431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-28 19:23:24.752438 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:23:24.752444 | orchestrator | 2025-05-28 19:23:24.752450 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] 
***********************
2025-05-28 19:23:24.752460 | orchestrator | Wednesday 28 May 2025 19:22:30 +0000 (0:00:01.115) 0:06:58.237 *********
2025-05-28 19:23:24.752467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752503 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.752509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752536 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.752542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-28 19:23:24.752567 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.752573 | orchestrator |
2025-05-28 19:23:24.752580 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-05-28 19:23:24.752586 | orchestrator | Wednesday 28 May 2025 19:22:32 +0000 (0:00:01.796) 0:07:00.034 *********
2025-05-28 19:23:24.752592 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.752599 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.752605 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.752611 | orchestrator
|
2025-05-28 19:23:24.752617 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-05-28 19:23:24.752624 | orchestrator | Wednesday 28 May 2025 19:22:34 +0000 (0:00:01.511) 0:07:01.546 *********
2025-05-28 19:23:24.752630 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.752636 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.752642 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.752649 | orchestrator |
2025-05-28 19:23:24.752655 | orchestrator | TASK [include_role : swift] ****************************************************
2025-05-28 19:23:24.752661 | orchestrator | Wednesday 28 May 2025 19:22:36 +0000 (0:00:02.562) 0:07:04.108 *********
2025-05-28 19:23:24.752672 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.752678 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.752684 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.752691 | orchestrator |
2025-05-28 19:23:24.752697 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-05-28 19:23:24.752703 | orchestrator | Wednesday 28 May 2025 19:22:36 +0000 (0:00:00.315) 0:07:04.424 *********
2025-05-28 19:23:24.752709 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.752715 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.752721 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.752727 | orchestrator |
2025-05-28 19:23:24.752734 | orchestrator | TASK [include_role : trove] ****************************************************
2025-05-28 19:23:24.752743 | orchestrator | Wednesday 28 May 2025 19:22:37 +0000 (0:00:00.584) 0:07:05.009 *********
2025-05-28 19:23:24.752750 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.752756 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.752762 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.752768 | orchestrator |
2025-05-28 19:23:24.752775 | orchestrator | TASK [include_role : venus] ****************************************************
2025-05-28 19:23:24.752781 | orchestrator | Wednesday 28 May 2025 19:22:38 +0000 (0:00:00.592) 0:07:05.602 *********
2025-05-28 19:23:24.752787 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.752793 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.752799 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.752805 | orchestrator |
2025-05-28 19:23:24.752814 | orchestrator | TASK [include_role : watcher] **************************************************
2025-05-28 19:23:24.752821 | orchestrator | Wednesday 28 May 2025 19:22:38 +0000 (0:00:00.322) 0:07:05.925 *********
2025-05-28 19:23:24.752827 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.752833 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.752838 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.752843 | orchestrator |
2025-05-28 19:23:24.752849 | orchestrator | TASK [include_role : zun] ******************************************************
2025-05-28 19:23:24.752854 | orchestrator | Wednesday 28 May 2025 19:22:39 +0000 (0:00:00.780) 0:07:06.705 *********
2025-05-28 19:23:24.752860 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.752866 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.752871 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.752877 | orchestrator |
2025-05-28 19:23:24.752882 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-05-28 19:23:24.752887 | orchestrator | Wednesday 28 May 2025 19:22:40 +0000 (0:00:01.048) 0:07:07.753 *********
2025-05-28 19:23:24.752893 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:23:24.752898 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:23:24.752904 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:23:24.752909 | orchestrator |
2025-05-28 19:23:24.752915 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-05-28 19:23:24.752921 | orchestrator | Wednesday 28 May 2025 19:22:40 +0000 (0:00:00.701) 0:07:08.455 *********
2025-05-28 19:23:24.752926 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:23:24.752932 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:23:24.752937 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:23:24.752943 | orchestrator |
2025-05-28 19:23:24.752949 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-05-28 19:23:24.752954 | orchestrator | Wednesday 28 May 2025 19:22:41 +0000 (0:00:00.635) 0:07:09.090 *********
2025-05-28 19:23:24.752960 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:23:24.752965 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:23:24.752971 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:23:24.752976 | orchestrator |
2025-05-28 19:23:24.752981 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-05-28 19:23:24.752987 | orchestrator | Wednesday 28 May 2025 19:22:42 +0000 (0:00:01.339) 0:07:10.430 *********
2025-05-28 19:23:24.752998 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:23:24.753004 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:23:24.753009 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:23:24.753015 | orchestrator |
2025-05-28 19:23:24.753020 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-05-28 19:23:24.753026 | orchestrator | Wednesday 28 May 2025 19:22:44 +0000 (0:00:01.269) 0:07:11.699 *********
2025-05-28 19:23:24.753031 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:23:24.753036 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:23:24.753042 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:23:24.753047 | orchestrator |
2025-05-28 19:23:24.753053 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-05-28 19:23:24.753058 | orchestrator | Wednesday 28 May 2025 19:22:45 +0000 (0:00:01.011) 0:07:12.711 *********
2025-05-28 19:23:24.753064 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.753069 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.753075 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.753080 | orchestrator |
2025-05-28 19:23:24.753086 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-05-28 19:23:24.753091 | orchestrator | Wednesday 28 May 2025 19:22:53 +0000 (0:00:08.670) 0:07:21.381 *********
2025-05-28 19:23:24.753097 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:23:24.753102 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:23:24.753107 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:23:24.753113 | orchestrator |
2025-05-28 19:23:24.753118 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-05-28 19:23:24.753124 | orchestrator | Wednesday 28 May 2025 19:22:55 +0000 (0:00:01.085) 0:07:22.467 *********
2025-05-28 19:23:24.753129 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.753134 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.753140 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.753145 | orchestrator |
2025-05-28 19:23:24.753151 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-05-28 19:23:24.753156 | orchestrator | Wednesday 28 May 2025 19:23:06 +0000 (0:00:11.040) 0:07:33.508 *********
2025-05-28 19:23:24.753162 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:23:24.753167 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:23:24.753172 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:23:24.753178 | orchestrator |
2025-05-28 19:23:24.753183 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-05-28 19:23:24.753189 | orchestrator | Wednesday 28 May 2025 19:23:07 +0000 (0:00:01.048) 0:07:34.557 *********
2025-05-28 19:23:24.753194 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:23:24.753200 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:23:24.753205 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:23:24.753210 | orchestrator |
2025-05-28 19:23:24.753245 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-05-28 19:23:24.753252 | orchestrator | Wednesday 28 May 2025 19:23:11 +0000 (0:00:04.715) 0:07:39.272 *********
2025-05-28 19:23:24.753274 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.753280 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.753286 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.753291 | orchestrator |
2025-05-28 19:23:24.753297 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-05-28 19:23:24.753302 | orchestrator | Wednesday 28 May 2025 19:23:12 +0000 (0:00:00.610) 0:07:39.883 *********
2025-05-28 19:23:24.753308 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.753318 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.753324 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.753329 | orchestrator |
2025-05-28 19:23:24.753335 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-05-28 19:23:24.753340 | orchestrator | Wednesday 28 May 2025 19:23:12 +0000 (0:00:00.367) 0:07:40.251 *********
2025-05-28 19:23:24.753346 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.753351 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.753362 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.753368 | orchestrator |
2025-05-28 19:23:24.753374 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-05-28 19:23:24.753382 | orchestrator | Wednesday 28 May 2025 19:23:13 +0000 (0:00:00.614) 0:07:40.866 *********
2025-05-28 19:23:24.753388 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.753393 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.753399 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.753405 | orchestrator |
2025-05-28 19:23:24.753411 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-05-28 19:23:24.753416 | orchestrator | Wednesday 28 May 2025 19:23:13 +0000 (0:00:00.591) 0:07:41.457 *********
2025-05-28 19:23:24.753422 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.753427 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.753433 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.753439 | orchestrator |
2025-05-28 19:23:24.753444 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-05-28 19:23:24.753450 | orchestrator | Wednesday 28 May 2025 19:23:14 +0000 (0:00:00.650) 0:07:42.108 *********
2025-05-28 19:23:24.753455 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:23:24.753461 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:23:24.753466 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:23:24.753472 | orchestrator |
2025-05-28 19:23:24.753477 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-05-28 19:23:24.753483 | orchestrator | Wednesday 28 May 2025 19:23:15 +0000 (0:00:00.387) 0:07:42.495 *********
2025-05-28 19:23:24.753488 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:23:24.753494 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:23:24.753500 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:23:24.753505 | orchestrator |
2025-05-28 19:23:24.753511 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-28
19:23:24.753516 | orchestrator | Wednesday 28 May 2025 19:23:20 +0000 (0:00:05.100) 0:07:47.595 *********
2025-05-28 19:23:24.753522 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:23:24.753527 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:23:24.753533 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:23:24.753539 | orchestrator |
2025-05-28 19:23:24.753545 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:23:24.753550 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-28 19:23:24.753556 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-28 19:23:24.753562 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-28 19:23:24.753567 | orchestrator |
2025-05-28 19:23:24.753573 | orchestrator |
2025-05-28 19:23:24.753578 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:23:24.753584 | orchestrator | Wednesday 28 May 2025 19:23:21 +0000 (0:00:01.197) 0:07:48.793 *********
2025-05-28 19:23:24.753589 | orchestrator | ===============================================================================
2025-05-28 19:23:24.753595 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.04s
2025-05-28 19:23:24.753600 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.67s
2025-05-28 19:23:24.753606 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.48s
2025-05-28 19:23:24.753611 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.57s
2025-05-28 19:23:24.753617 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.15s
2025-05-28 19:23:24.753623 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 6.52s
2025-05-28 19:23:24.753628 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.72s
2025-05-28 19:23:24.753638 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.70s
2025-05-28 19:23:24.753644 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.59s
2025-05-28 19:23:24.753649 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.55s
2025-05-28 19:23:24.753655 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 5.55s
2025-05-28 19:23:24.753660 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.52s
2025-05-28 19:23:24.753666 | orchestrator | loadbalancer : Removing checks for services which are disabled ---------- 5.47s
2025-05-28 19:23:24.753671 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.45s
2025-05-28 19:23:24.753676 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.42s
2025-05-28 19:23:24.753682 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.33s
2025-05-28 19:23:24.753687 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.14s
2025-05-28 19:23:24.753693 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.10s
2025-05-28 19:23:24.753698 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 5.10s
2025-05-28 19:23:24.753704 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.08s
2025-05-28 19:23:24.753713 | orchestrator | 2025-05-28 19:23:24 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED
2025-05-28 19:23:24.753719 |
orchestrator | 2025-05-28 19:23:24 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:23:24.753725 | orchestrator | 2025-05-28 19:23:24 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:23:24.753730 | orchestrator | 2025-05-28 19:23:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:23:27.786888 | orchestrator | 2025-05-28 19:23:27 | INFO  | Task fc661ab2-da4d-44bc-9a08-a7d94488992a is in state STARTED 2025-05-28 19:23:27.789496 | orchestrator | 2025-05-28 19:23:27 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:23:27.792114 | orchestrator | 2025-05-28 19:23:27 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:23:27.793469 | orchestrator | 2025-05-28 19:23:27 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:23:27.793810 | orchestrator | 2025-05-28 19:23:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:25:39.201121 | orchestrator | 2025-05-28 19:25:39 | INFO  | Task fc661ab2-da4d-44bc-9a08-a7d94488992a is in state STARTED 2025-05-28 19:25:39.202656 | orchestrator | 2025-05-28 19:25:39 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:25:39.204184 | orchestrator | 2025-05-28 19:25:39 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:25:39.205694 | orchestrator | 2025-05-28 19:25:39 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:25:39.205791 | orchestrator | 2025-05-28 19:25:39 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:25:42.257695 | orchestrator | 2025-05-28 19:25:42 | INFO  | Task fc661ab2-da4d-44bc-9a08-a7d94488992a is in state SUCCESS 2025-05-28 19:25:42.258770 | orchestrator | 2025-05-28 19:25:42.258813 | orchestrator | 2025-05-28 19:25:42.258827 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:25:42.258839 |
orchestrator | 2025-05-28 19:25:42.258946 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:25:42.258961 | orchestrator | Wednesday 28 May 2025 19:23:25 +0000 (0:00:00.392) 0:00:00.392 ********* 2025-05-28 19:25:42.258973 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:25:42.258985 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:25:42.258996 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:25:42.259008 | orchestrator | 2025-05-28 19:25:42.259020 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:25:42.259032 | orchestrator | Wednesday 28 May 2025 19:23:25 +0000 (0:00:00.431) 0:00:00.823 ********* 2025-05-28 19:25:42.259044 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-28 19:25:42.259055 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-28 19:25:42.259066 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-28 19:25:42.259106 | orchestrator | 2025-05-28 19:25:42.259117 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-28 19:25:42.259128 | orchestrator | 2025-05-28 19:25:42.259139 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-28 19:25:42.259150 | orchestrator | Wednesday 28 May 2025 19:23:26 +0000 (0:00:00.305) 0:00:01.128 ********* 2025-05-28 19:25:42.259161 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:25:42.259172 | orchestrator | 2025-05-28 19:25:42.259183 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-28 19:25:42.259194 | orchestrator | Wednesday 28 May 2025 19:23:26 +0000 (0:00:00.746) 0:00:01.875 ********* 2025-05-28 19:25:42.259205 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-28 19:25:42.259216 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-28 19:25:42.259227 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-28 19:25:42.259238 | orchestrator | 2025-05-28 19:25:42.259249 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-28 19:25:42.259260 | orchestrator | Wednesday 28 May 2025 19:23:27 +0000 (0:00:00.866) 0:00:02.742 ********* 2025-05-28 19:25:42.259275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.259328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.259356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.259373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.259389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.259417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.259431 | orchestrator | 2025-05-28 19:25:42.259443 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-28 19:25:42.259455 | orchestrator | Wednesday 28 May 2025 19:23:29 +0000 (0:00:01.699) 0:00:04.441 ********* 2025-05-28 19:25:42.259467 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:25:42.259479 | orchestrator | 2025-05-28 19:25:42.259491 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-28 19:25:42.259503 | orchestrator | Wednesday 28 May 2025 19:23:30 +0000 (0:00:01.090) 0:00:05.531 ********* 2025-05-28 19:25:42.259526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.259540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.259553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.259579 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.259601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.259616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.259630 | orchestrator | 2025-05-28 19:25:42.259642 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-28 19:25:42.259655 | orchestrator | Wednesday 28 May 2025 19:23:34 +0000 (0:00:03.855) 0:00:09.387 ********* 2025-05-28 19:25:42.259673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 19:25:42.259693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 19:25:42.259707 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:25:42.259726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 19:25:42.259739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 19:25:42.259757 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:25:42.259769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 19:25:42.259786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 19:25:42.259798 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:25:42.259809 | orchestrator | 2025-05-28 19:25:42.259821 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-28 19:25:42.259833 | 
orchestrator | Wednesday 28 May 2025 19:23:35 +0000 (0:00:00.993) 0:00:10.380 ********* 2025-05-28 19:25:42.259850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 19:25:42.259863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 19:25:42.259881 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:25:42.259893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 19:25:42.259910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 19:25:42.259922 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:25:42.259939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-28 19:25:42.259952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-28 19:25:42.259970 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:25:42.259981 | orchestrator | 2025-05-28 19:25:42.259993 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-28 19:25:42.260004 | orchestrator | Wednesday 28 May 2025 19:23:36 +0000 (0:00:01.406) 0:00:11.786 ********* 2025-05-28 19:25:42.260015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.260027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.260044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.260064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.260109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.260122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.260134 | orchestrator | 2025-05-28 19:25:42.260145 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-28 19:25:42.260157 | orchestrator | Wednesday 28 May 2025 19:23:39 +0000 (0:00:02.608) 0:00:14.394 ********* 2025-05-28 19:25:42.260173 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:25:42.260184 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:25:42.260195 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:25:42.260206 | orchestrator | 2025-05-28 19:25:42.260217 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-28 19:25:42.260228 | orchestrator | Wednesday 28 May 2025 19:23:42 +0000 (0:00:03.089) 0:00:17.484 ********* 2025-05-28 19:25:42.260275 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:25:42.260288 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:25:42.260299 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:25:42.260310 | orchestrator | 2025-05-28 19:25:42.260321 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-28 19:25:42.260332 | orchestrator | Wednesday 28 May 2025 19:23:44 +0000 (0:00:01.952) 0:00:19.436 ********* 2025-05-28 19:25:42.260484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.260509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.260522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-28 19:25:42.260540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.260561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.260580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-28 19:25:42.260592 | orchestrator | 2025-05-28 19:25:42.260603 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2025-05-28 19:25:42.260615 | orchestrator | Wednesday 28 May 2025 19:23:47 +0000 (0:00:03.317) 0:00:22.754 ********* 2025-05-28 19:25:42.260626 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:25:42.260637 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:25:42.260648 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:25:42.260659 | orchestrator | 2025-05-28 19:25:42.260670 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-28 19:25:42.260681 | orchestrator | Wednesday 28 May 2025 19:23:48 +0000 (0:00:00.546) 0:00:23.301 ********* 2025-05-28 19:25:42.260692 | orchestrator | 2025-05-28 19:25:42.260703 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-28 19:25:42.260714 | orchestrator | Wednesday 28 May 2025 19:23:48 +0000 (0:00:00.280) 0:00:23.581 ********* 2025-05-28 19:25:42.260725 | orchestrator | 2025-05-28 19:25:42.260736 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-28 19:25:42.260747 | orchestrator | Wednesday 28 May 2025 19:23:48 +0000 (0:00:00.056) 0:00:23.638 ********* 2025-05-28 19:25:42.260758 | orchestrator | 2025-05-28 19:25:42.260769 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-28 19:25:42.260780 | orchestrator | Wednesday 28 May 2025 19:23:48 +0000 (0:00:00.070) 0:00:23.708 ********* 2025-05-28 19:25:42.260791 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:25:42.260802 | orchestrator | 2025-05-28 19:25:42.260813 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-28 19:25:42.260828 | orchestrator | Wednesday 28 May 2025 19:23:48 +0000 (0:00:00.229) 0:00:23.938 ********* 2025-05-28 19:25:42.260840 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:25:42.260851 | orchestrator | 
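The `healthcheck` blocks in the container definitions above (`interval: 30`, `retries: 3`, `test: healthcheck_curl http://…:9200`) are rendered into Docker's native healthcheck mechanism: re-run a probe command and consider the container healthy once a probe succeeds, unhealthy after repeated failures. A minimal sketch of that retry semantics with a pluggable probe callable standing in for the real `healthcheck_curl` helper (the function names here are illustrative, not kolla's implementation, and interval timing is omitted):

```python
def is_healthy(probe, retries=3):
    """Return True as soon as one probe succeeds, False after `retries` failures.

    `probe` is any zero-argument callable returning True/False -- in the log
    above it would be the CMD-SHELL test (healthcheck_curl against port 9200).
    """
    for _ in range(retries):
        if probe():
            return True
    return False

# Simulated probe: fails twice while the node comes up, then succeeds.
attempts = iter([False, False, True])
print(is_healthy(lambda: next(attempts), retries=3))  # True
```

Note this is a simplification: real Docker healthchecks also wait `interval` seconds between probes and honor `start_period` before counting failures.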
2025-05-28 19:25:42.260862 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-28 19:25:42.260872 | orchestrator | Wednesday 28 May 2025 19:23:49 +0000 (0:00:00.843) 0:00:24.781 ********* 2025-05-28 19:25:42.260883 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:25:42.260894 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:25:42.260905 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:25:42.260917 | orchestrator | 2025-05-28 19:25:42.260927 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-28 19:25:42.260938 | orchestrator | Wednesday 28 May 2025 19:24:27 +0000 (0:00:37.434) 0:01:02.216 ********* 2025-05-28 19:25:42.260949 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:25:42.260960 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:25:42.260971 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:25:42.260982 | orchestrator | 2025-05-28 19:25:42.260999 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-28 19:25:42.261010 | orchestrator | Wednesday 28 May 2025 19:25:30 +0000 (0:01:02.997) 0:02:05.213 ********* 2025-05-28 19:25:42.261026 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:25:42.261037 | orchestrator | 2025-05-28 19:25:42.261048 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-28 19:25:42.261059 | orchestrator | Wednesday 28 May 2025 19:25:30 +0000 (0:00:00.756) 0:02:05.970 ********* 2025-05-28 19:25:42.261090 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:25:42.261103 | orchestrator | 2025-05-28 19:25:42.261116 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-28 19:25:42.261128 | orchestrator | Wednesday 28 May 2025 19:25:33 +0000 
(0:00:02.530) 0:02:08.501 ********* 2025-05-28 19:25:42.261140 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:25:42.261152 | orchestrator | 2025-05-28 19:25:42.261165 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-28 19:25:42.261176 | orchestrator | Wednesday 28 May 2025 19:25:35 +0000 (0:00:02.420) 0:02:10.922 ********* 2025-05-28 19:25:42.261189 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:25:42.261201 | orchestrator | 2025-05-28 19:25:42.261214 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-28 19:25:42.261226 | orchestrator | Wednesday 28 May 2025 19:25:38 +0000 (0:00:02.920) 0:02:13.842 ********* 2025-05-28 19:25:42.261238 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:25:42.261251 | orchestrator | 2025-05-28 19:25:42.261269 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:25:42.261284 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 19:25:42.261297 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:25:42.261309 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-28 19:25:42.261322 | orchestrator | 2025-05-28 19:25:42.261422 | orchestrator | 2025-05-28 19:25:42.261438 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:25:42.261451 | orchestrator | Wednesday 28 May 2025 19:25:41 +0000 (0:00:02.948) 0:02:16.790 ********* 2025-05-28 19:25:42.261462 | orchestrator | =============================================================================== 2025-05-28 19:25:42.261473 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 63.00s 2025-05-28 19:25:42.261484 | 
orchestrator | opensearch : Restart opensearch container ------------------------------ 37.43s 2025-05-28 19:25:42.261495 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.86s 2025-05-28 19:25:42.261506 | orchestrator | opensearch : Check opensearch containers -------------------------------- 3.32s 2025-05-28 19:25:42.261517 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.09s 2025-05-28 19:25:42.261528 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.95s 2025-05-28 19:25:42.261539 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.92s 2025-05-28 19:25:42.261549 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.61s 2025-05-28 19:25:42.261560 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.53s 2025-05-28 19:25:42.261571 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.42s 2025-05-28 19:25:42.261582 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.95s 2025-05-28 19:25:42.261593 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.70s 2025-05-28 19:25:42.261604 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.41s 2025-05-28 19:25:42.261623 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.09s 2025-05-28 19:25:42.261644 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.99s 2025-05-28 19:25:42.261659 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.87s 2025-05-28 19:25:42.261671 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.84s 2025-05-28 19:25:42.261682 | 
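The "Create new log retention policy" step timed in the recap above posts an ISM (Index State Management) policy document to OpenSearch. A hedged sketch of what such a policy body can look like; the index pattern, age threshold, and policy id below are illustrative values, not necessarily the ones the opensearch role uses:

```python
import json

# Illustrative ISM policy: move indices to a delete state 14 days after creation.
retention_policy = {
    "policy": {
        "description": "Example log retention policy",
        "default_state": "hot",
        "states": [
            {"name": "hot",
             "actions": [],
             "transitions": [{"state_name": "delete",
                              "conditions": {"min_index_age": "14d"}}]},
            {"name": "delete",
             "actions": [{"delete": {}}],
             "transitions": []},
        ],
        # Auto-attach the policy to newly created indices matching the pattern.
        "ism_template": [{"index_patterns": ["flog-*"], "priority": 100}],
    }
}

# The role would PUT this to _plugins/_ism/policies/<policy_id>;
# here we only render the request body.
print(json.dumps(retention_policy, indent=2))
```

Applying it to *existing* indices (the next task in the log) is a separate call, `POST _plugins/_ism/add/<index-pattern>`, since the `ism_template` only covers indices created after the policy exists.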
orchestrator | opensearch : include_tasks ---------------------------------------------- 0.76s 2025-05-28 19:25:42.261693 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.75s 2025-05-28 19:25:42.261703 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2025-05-28 19:25:42.261715 | orchestrator | 2025-05-28 19:25:42 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:25:42.261731 | orchestrator | 2025-05-28 19:25:42 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:25:42.262396 | orchestrator | 2025-05-28 19:25:42 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:25:42.262697 | orchestrator | 2025-05-28 19:25:42 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:25:45.312834 | orchestrator | 2025-05-28 19:25:45 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:25:45.314447 | orchestrator | 2025-05-28 19:25:45 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:25:45.316670 | orchestrator | 2025-05-28 19:25:45 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:25:45.316739 | orchestrator | 2025-05-28 19:25:45 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:25:48.366108 | orchestrator | 2025-05-28 19:25:48 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:25:48.366897 | orchestrator | 2025-05-28 19:25:48 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:25:48.368501 | orchestrator | 2025-05-28 19:25:48 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:25:48.368706 | orchestrator | 2025-05-28 19:25:48 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:25:51.442295 | orchestrator | 2025-05-28 19:25:51 | INFO  | Task 
c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:25:51.444571 | orchestrator | 2025-05-28 19:25:51 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:25:51.445950 | orchestrator | 2025-05-28 19:25:51 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:25:51.446584 | orchestrator | 2025-05-28 19:25:51 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:25:54.498170 | orchestrator | 2025-05-28 19:25:54 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:25:54.500750 | orchestrator | 2025-05-28 19:25:54 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:25:54.503595 | orchestrator | 2025-05-28 19:25:54 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:25:54.504149 | orchestrator | 2025-05-28 19:25:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:25:57.568622 | orchestrator | 2025-05-28 19:25:57 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:25:57.570391 | orchestrator | 2025-05-28 19:25:57 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:25:57.571885 | orchestrator | 2025-05-28 19:25:57 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:25:57.571947 | orchestrator | 2025-05-28 19:25:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:00.618287 | orchestrator | 2025-05-28 19:26:00 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:00.620087 | orchestrator | 2025-05-28 19:26:00 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:00.621333 | orchestrator | 2025-05-28 19:26:00 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:00.621370 | orchestrator | 2025-05-28 19:26:00 | INFO  | Wait 1 second(s) until the next 
check 2025-05-28 19:26:03.670835 | orchestrator | 2025-05-28 19:26:03 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:03.672685 | orchestrator | 2025-05-28 19:26:03 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:03.674397 | orchestrator | 2025-05-28 19:26:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:03.674432 | orchestrator | 2025-05-28 19:26:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:06.725210 | orchestrator | 2025-05-28 19:26:06 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:06.726154 | orchestrator | 2025-05-28 19:26:06 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:06.727782 | orchestrator | 2025-05-28 19:26:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:06.727918 | orchestrator | 2025-05-28 19:26:06 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:09.778610 | orchestrator | 2025-05-28 19:26:09 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:09.781234 | orchestrator | 2025-05-28 19:26:09 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:09.782604 | orchestrator | 2025-05-28 19:26:09 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:09.782630 | orchestrator | 2025-05-28 19:26:09 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:12.824752 | orchestrator | 2025-05-28 19:26:12 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:12.826221 | orchestrator | 2025-05-28 19:26:12 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:12.828387 | orchestrator | 2025-05-28 19:26:12 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 
19:26:12.828424 | orchestrator | 2025-05-28 19:26:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:15.865626 | orchestrator | 2025-05-28 19:26:15 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:15.866148 | orchestrator | 2025-05-28 19:26:15 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:15.866894 | orchestrator | 2025-05-28 19:26:15 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:15.866918 | orchestrator | 2025-05-28 19:26:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:18.917252 | orchestrator | 2025-05-28 19:26:18 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:18.920551 | orchestrator | 2025-05-28 19:26:18 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:18.921742 | orchestrator | 2025-05-28 19:26:18 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:18.922115 | orchestrator | 2025-05-28 19:26:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:21.970836 | orchestrator | 2025-05-28 19:26:21 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:21.971883 | orchestrator | 2025-05-28 19:26:21 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:21.973400 | orchestrator | 2025-05-28 19:26:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:21.973800 | orchestrator | 2025-05-28 19:26:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:25.028781 | orchestrator | 2025-05-28 19:26:25 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:25.029979 | orchestrator | 2025-05-28 19:26:25 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:25.031287 | orchestrator | 2025-05-28 19:26:25 | 
INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:25.031459 | orchestrator | 2025-05-28 19:26:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:28.091319 | orchestrator | 2025-05-28 19:26:28 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:28.091427 | orchestrator | 2025-05-28 19:26:28 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:28.092503 | orchestrator | 2025-05-28 19:26:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:28.092697 | orchestrator | 2025-05-28 19:26:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:31.147295 | orchestrator | 2025-05-28 19:26:31 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:31.147387 | orchestrator | 2025-05-28 19:26:31 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state STARTED 2025-05-28 19:26:31.150201 | orchestrator | 2025-05-28 19:26:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:31.150229 | orchestrator | 2025-05-28 19:26:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:34.210451 | orchestrator | 2025-05-28 19:26:34 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:34.220690 | orchestrator | 2025-05-28 19:26:34 | INFO  | Task 861d5d77-15e1-49a8-901b-1a764adafb35 is in state SUCCESS 2025-05-28 19:26:34.222337 | orchestrator | 2025-05-28 19:26:34.222372 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-28 19:26:34.222385 | orchestrator | 2025-05-28 19:26:34.222397 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-28 19:26:34.222410 | orchestrator | 2025-05-28 19:26:34.222422 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 
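The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above is a plain poll loop: look up each task id's state, log it, sleep, and repeat until a terminal state such as SUCCESS appears. A minimal sketch of that pattern with a stubbed state lookup (`get_state` is an assumed callable for illustration, not OSISM's actual API):

```python
import itertools
import time

def wait_for_tasks(task_ids, get_state, delay=1.0, max_checks=100):
    """Poll until every task leaves STARTED; return the final state per task."""
    final = {}
    for _ in range(max_checks):
        for tid in task_ids:
            if tid in final:
                continue  # already finished, stop polling it
            state = get_state(tid)
            print(f"Task {tid} is in state {state}")
            if state != "STARTED":
                final[tid] = state
        if len(final) == len(task_ids):
            return final
        time.sleep(delay)  # "Wait 1 second(s) until the next check"
    return final

# Stub: the task reports STARTED twice, then SUCCESS -- as in the log above.
states = itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))
print(wait_for_tasks(["861d5d77"], lambda tid: next(states), delay=0))
```

The fixed 1-second delay keeps the loop simple at the cost of log volume; a backoff would trade fewer log lines for slower detection of completion.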
2025-05-28 19:26:34.222434 | orchestrator | Wednesday 28 May 2025 19:12:53 +0000 (0:00:02.003) 0:00:02.003 ********* 2025-05-28 19:26:34.222446 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.222458 | orchestrator | 2025-05-28 19:26:34.222469 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-28 19:26:34.222480 | orchestrator | Wednesday 28 May 2025 19:12:55 +0000 (0:00:01.760) 0:00:03.763 ********* 2025-05-28 19:26:34.222492 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:26:34.222503 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-28 19:26:34.222515 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-28 19:26:34.222526 | orchestrator | 2025-05-28 19:26:34.222537 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-28 19:26:34.222561 | orchestrator | Wednesday 28 May 2025 19:12:56 +0000 (0:00:00.839) 0:00:04.603 ********* 2025-05-28 19:26:34.222595 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.222606 | orchestrator | 2025-05-28 19:26:34.222617 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-28 19:26:34.222628 | orchestrator | Wednesday 28 May 2025 19:12:57 +0000 (0:00:01.616) 0:00:06.219 ********* 2025-05-28 19:26:34.222639 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.222650 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.222661 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.222672 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.222683 | orchestrator | ok: 
[testbed-node-4] 2025-05-28 19:26:34.222694 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.222705 | orchestrator | 2025-05-28 19:26:34.222716 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-28 19:26:34.222727 | orchestrator | Wednesday 28 May 2025 19:12:59 +0000 (0:00:02.062) 0:00:08.281 ********* 2025-05-28 19:26:34.222738 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.222749 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.222760 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.222770 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.222781 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.222792 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.222803 | orchestrator | 2025-05-28 19:26:34.222814 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-28 19:26:34.222846 | orchestrator | Wednesday 28 May 2025 19:13:01 +0000 (0:00:01.138) 0:00:09.420 ********* 2025-05-28 19:26:34.222858 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.222869 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.222880 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.222891 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.222902 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.222914 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.222927 | orchestrator | 2025-05-28 19:26:34.222939 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-28 19:26:34.222952 | orchestrator | Wednesday 28 May 2025 19:13:02 +0000 (0:00:01.550) 0:00:10.970 ********* 2025-05-28 19:26:34.222964 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.222995 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.223007 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.223020 | orchestrator | ok: [testbed-node-3] 2025-05-28 
19:26:34.223031 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.223042 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.223052 | orchestrator | 2025-05-28 19:26:34.223069 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-28 19:26:34.223093 | orchestrator | Wednesday 28 May 2025 19:13:03 +0000 (0:00:01.168) 0:00:12.139 ********* 2025-05-28 19:26:34.223104 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.223115 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.223126 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.223136 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.223147 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.223158 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.223190 | orchestrator | 2025-05-28 19:26:34.223206 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-28 19:26:34.223217 | orchestrator | Wednesday 28 May 2025 19:13:05 +0000 (0:00:01.282) 0:00:13.421 ********* 2025-05-28 19:26:34.223234 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.223245 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.223255 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.223266 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.223277 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.223288 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.223299 | orchestrator | 2025-05-28 19:26:34.223310 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-28 19:26:34.223330 | orchestrator | Wednesday 28 May 2025 19:13:06 +0000 (0:00:01.663) 0:00:15.085 ********* 2025-05-28 19:26:34.223341 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.223353 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.223364 | orchestrator | skipping: [testbed-node-2] 2025-05-28 
19:26:34.223375 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.223386 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.223396 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.223407 | orchestrator | 2025-05-28 19:26:34.223418 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-28 19:26:34.223430 | orchestrator | Wednesday 28 May 2025 19:13:07 +0000 (0:00:00.673) 0:00:15.758 ********* 2025-05-28 19:26:34.223441 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.223452 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.223463 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.223474 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.223485 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.223495 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.223506 | orchestrator | 2025-05-28 19:26:34.223527 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-28 19:26:34.223539 | orchestrator | Wednesday 28 May 2025 19:13:08 +0000 (0:00:01.083) 0:00:16.842 ********* 2025-05-28 19:26:34.223550 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:26:34.223561 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 19:26:34.223572 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 19:26:34.223583 | orchestrator | 2025-05-28 19:26:34.223594 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-28 19:26:34.223605 | orchestrator | Wednesday 28 May 2025 19:13:09 +0000 (0:00:00.848) 0:00:17.690 ********* 2025-05-28 19:26:34.223616 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.223627 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.223638 | orchestrator | ok: [testbed-node-2] 2025-05-28 
19:26:34.223649 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.223660 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.223671 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.223681 | orchestrator | 2025-05-28 19:26:34.223692 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-28 19:26:34.223704 | orchestrator | Wednesday 28 May 2025 19:13:10 +0000 (0:00:01.566) 0:00:19.257 ********* 2025-05-28 19:26:34.223715 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:26:34.223731 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 19:26:34.223742 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 19:26:34.223753 | orchestrator | 2025-05-28 19:26:34.223764 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-28 19:26:34.223775 | orchestrator | Wednesday 28 May 2025 19:13:13 +0000 (0:00:02.777) 0:00:22.034 ********* 2025-05-28 19:26:34.223786 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 19:26:34.223797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 19:26:34.223808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 19:26:34.223819 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.223830 | orchestrator | 2025-05-28 19:26:34.223841 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-28 19:26:34.223852 | orchestrator | Wednesday 28 May 2025 19:13:14 +0000 (0:00:00.802) 0:00:22.836 ********* 2025-05-28 19:26:34.223864 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-28 19:26:34.223877 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-28 19:26:34.223896 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-28 19:26:34.223921 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.223933 | orchestrator | 2025-05-28 19:26:34.223944 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-28 19:26:34.223955 | orchestrator | Wednesday 28 May 2025 19:13:15 +0000 (0:00:01.170) 0:00:24.007 ********* 2025-05-28 19:26:34.223968 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 19:26:34.223996 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 19:26:34.224008 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 19:26:34.224019 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.224030 | orchestrator | 2025-05-28 19:26:34.224041 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-28 19:26:34.224061 | orchestrator | Wednesday 28 May 2025 19:13:16 +0000 (0:00:00.335) 0:00:24.343 ********* 2025-05-28 19:26:34.224075 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-28 19:13:11.570119', 'end': '2025-05-28 19:13:11.828894', 'delta': '0:00:00.258775', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-28 19:26:34.224094 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-28 19:13:12.456193', 'end': '2025-05-28 19:13:12.708350', 'delta': '0:00:00.252157', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': 
None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-28 19:26:34.224107 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-28 19:13:13.255468', 'end': '2025-05-28 19:13:13.508025', 'delta': '0:00:00.252557', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-28 19:26:34.224125 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.224136 | orchestrator | 2025-05-28 19:26:34.224147 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-28 19:26:34.224159 | orchestrator | Wednesday 28 May 2025 19:13:16 +0000 (0:00:00.271) 0:00:24.614 ********* 2025-05-28 19:26:34.224170 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.224181 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.224192 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.224203 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.224214 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.224225 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.224235 | orchestrator | 2025-05-28 19:26:34.224247 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-28 19:26:34.224258 | orchestrator | Wednesday 28 May 2025 19:13:18 +0000 (0:00:01.989) 0:00:26.603 ********* 2025-05-28 19:26:34.224269 | orchestrator | ok: 
[testbed-node-0] 2025-05-28 19:26:34.224280 | orchestrator | 2025-05-28 19:26:34.224291 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-28 19:26:34.224302 | orchestrator | Wednesday 28 May 2025 19:13:19 +0000 (0:00:00.890) 0:00:27.494 ********* 2025-05-28 19:26:34.224313 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.224324 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.224335 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.224346 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.224357 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.224367 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.224378 | orchestrator | 2025-05-28 19:26:34.224390 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-28 19:26:34.224400 | orchestrator | Wednesday 28 May 2025 19:13:20 +0000 (0:00:01.265) 0:00:28.760 ********* 2025-05-28 19:26:34.224411 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.224422 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.224433 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.224444 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.224455 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.224466 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.224477 | orchestrator | 2025-05-28 19:26:34.224488 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-28 19:26:34.224499 | orchestrator | Wednesday 28 May 2025 19:13:23 +0000 (0:00:02.997) 0:00:31.757 ********* 2025-05-28 19:26:34.224510 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.224521 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.224532 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.224543 | orchestrator | skipping: 
[testbed-node-3] 2025-05-28 19:26:34.224554 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.224564 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.224575 | orchestrator | 2025-05-28 19:26:34.224586 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-28 19:26:34.224597 | orchestrator | Wednesday 28 May 2025 19:13:24 +0000 (0:00:00.796) 0:00:32.553 ********* 2025-05-28 19:26:34.224614 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.224626 | orchestrator | 2025-05-28 19:26:34.224638 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-28 19:26:34.224649 | orchestrator | Wednesday 28 May 2025 19:13:24 +0000 (0:00:00.378) 0:00:32.932 ********* 2025-05-28 19:26:34.224666 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.224677 | orchestrator | 2025-05-28 19:26:34.224688 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-28 19:26:34.224699 | orchestrator | Wednesday 28 May 2025 19:13:24 +0000 (0:00:00.368) 0:00:33.301 ********* 2025-05-28 19:26:34.224710 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.224721 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.224732 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.224742 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.224753 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.224764 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.224775 | orchestrator | 2025-05-28 19:26:34.224786 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-28 19:26:34.224797 | orchestrator | Wednesday 28 May 2025 19:13:25 +0000 (0:00:00.804) 0:00:34.105 ********* 2025-05-28 19:26:34.224808 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.224819 | orchestrator | skipping: 
[testbed-node-1] 2025-05-28 19:26:34.224829 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.224840 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.224851 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.224867 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.224879 | orchestrator | 2025-05-28 19:26:34.224890 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-28 19:26:34.224901 | orchestrator | Wednesday 28 May 2025 19:13:27 +0000 (0:00:01.279) 0:00:35.384 ********* 2025-05-28 19:26:34.224911 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.224923 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.224933 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.224944 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.224955 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.224966 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.224992 | orchestrator | 2025-05-28 19:26:34.225003 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-28 19:26:34.225015 | orchestrator | Wednesday 28 May 2025 19:13:28 +0000 (0:00:01.107) 0:00:36.492 ********* 2025-05-28 19:26:34.225026 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.225037 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.225048 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.225059 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.225070 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.225081 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.225091 | orchestrator | 2025-05-28 19:26:34.225103 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-28 19:26:34.225114 | orchestrator | Wednesday 28 May 2025 19:13:29 +0000 (0:00:01.523) 
0:00:38.016 ********* 2025-05-28 19:26:34.225125 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.225136 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.225147 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.225158 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.225169 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.225180 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.225191 | orchestrator | 2025-05-28 19:26:34.225202 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-28 19:26:34.225214 | orchestrator | Wednesday 28 May 2025 19:13:30 +0000 (0:00:00.977) 0:00:38.994 ********* 2025-05-28 19:26:34.225225 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.225235 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.225247 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.225258 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.225269 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.225280 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.225291 | orchestrator | 2025-05-28 19:26:34.225302 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-28 19:26:34.225318 | orchestrator | Wednesday 28 May 2025 19:13:31 +0000 (0:00:01.312) 0:00:40.306 ********* 2025-05-28 19:26:34.225330 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.225341 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.225352 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.225363 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.225374 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.225385 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.225396 | orchestrator | 2025-05-28 19:26:34.225407 | orchestrator | TASK [ceph-facts : set_fact devices 
generate device list when osd_auto_discovery] *** 2025-05-28 19:26:34.225418 | orchestrator | Wednesday 28 May 2025 19:13:32 +0000 (0:00:00.961) 0:00:41.267 ********* 2025-05-28 19:26:34.225429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-05-28 19:26:34.225496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7', 'scsi-SQEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.225593 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-18-27-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.225633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225651 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c51b3b6a-c51a-4126-90e5-19ce13e2ffec', 'scsi-SQEMU_QEMU_HARDDISK_c51b3b6a-c51a-4126-90e5-19ce13e2ffec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c51b3b6a-c51a-4126-90e5-19ce13e2ffec-part1', 'scsi-SQEMU_QEMU_HARDDISK_c51b3b6a-c51a-4126-90e5-19ce13e2ffec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c51b3b6a-c51a-4126-90e5-19ce13e2ffec-part14', 'scsi-SQEMU_QEMU_HARDDISK_c51b3b6a-c51a-4126-90e5-19ce13e2ffec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c51b3b6a-c51a-4126-90e5-19ce13e2ffec-part15', 'scsi-SQEMU_QEMU_HARDDISK_c51b3b6a-c51a-4126-90e5-19ce13e2ffec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c51b3b6a-c51a-4126-90e5-19ce13e2ffec-part16', 'scsi-SQEMU_QEMU_HARDDISK_c51b3b6a-c51a-4126-90e5-19ce13e2ffec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.225745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-18-27-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.225757 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.225769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225803 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225849 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.225860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.225935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccd95cfc-fef5-427d-aa82-20b0aa69c6ea', 'scsi-SQEMU_QEMU_HARDDISK_ccd95cfc-fef5-427d-aa82-20b0aa69c6ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccd95cfc-fef5-427d-aa82-20b0aa69c6ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_ccd95cfc-fef5-427d-aa82-20b0aa69c6ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccd95cfc-fef5-427d-aa82-20b0aa69c6ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_ccd95cfc-fef5-427d-aa82-20b0aa69c6ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ccd95cfc-fef5-427d-aa82-20b0aa69c6ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_ccd95cfc-fef5-427d-aa82-20b0aa69c6ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ccd95cfc-fef5-427d-aa82-20b0aa69c6ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_ccd95cfc-fef5-427d-aa82-20b0aa69c6ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.225949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-18-27-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.225965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79c077cd--dd98--5cad--a8fa--86d8aa897eb3-osd--block--79c077cd--dd98--5cad--a8fa--86d8aa897eb3', 'dm-uuid-LVM-UBrlOBB861jVBN2oFE7flYtnx14OEDwnXwBJxe52At9drgNHuOzs8cxgMCMljOpr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--117a45ef--4e6c--5b76--bea4--f0c196d92690-osd--block--117a45ef--4e6c--5b76--bea4--f0c196d92690', 'dm-uuid-LVM-PnniarbhwZR82CJRkBx0Ja60r5xicpTcoxdkJGMjhdGMdoe25FLGJsA3G7WeFE7b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226066 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.226077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e', 'scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--79c077cd--dd98--5cad--a8fa--86d8aa897eb3-osd--block--79c077cd--dd98--5cad--a8fa--86d8aa897eb3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yQ0K7d-FFRQ-fZ3L-gEwh-Nuf2-uOxH-WZESvb', 'scsi-0QEMU_QEMU_HARDDISK_49a2ee15-28bf-4b5f-b85e-3182eb91d801', 'scsi-SQEMU_QEMU_HARDDISK_49a2ee15-28bf-4b5f-b85e-3182eb91d801'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--117a45ef--4e6c--5b76--bea4--f0c196d92690-osd--block--117a45ef--4e6c--5b76--bea4--f0c196d92690'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3cLG1x-g7xV-colf-9NHq-anwz-LgS6-fAz0xA', 'scsi-0QEMU_QEMU_HARDDISK_1334c062-0c98-48ca-b2e9-c7f7d80524d4', 'scsi-SQEMU_QEMU_HARDDISK_1334c062-0c98-48ca-b2e9-c7f7d80524d4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_384074e9-09a1-4592-86bd-93fc7dbc72b1', 'scsi-SQEMU_QEMU_HARDDISK_384074e9-09a1-4592-86bd-93fc7dbc72b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-18-27-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--3ed7399e--dc97--5c28--9f68--879666a39403-osd--block--3ed7399e--dc97--5c28--9f68--879666a39403', 'dm-uuid-LVM-5Ln76UVwsZb24Ce9e2cHJzEQ4hrh0bAq3kxwIGAllXNeqToFd7SL1rej2NELmu9n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0344b063--3cec--5ade--bfbf--9241287811af-osd--block--0344b063--3cec--5ade--bfbf--9241287811af', 'dm-uuid-LVM-6TANzYwr6bZprSERlwam74GDjlqkdRykRYD1B1Fjvyn41kwk7th3THqF0s7lywXA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226410 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.226422 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3', 'scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226482 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3ed7399e--dc97--5c28--9f68--879666a39403-osd--block--3ed7399e--dc97--5c28--9f68--879666a39403'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aRJ4k3-5LQf-h667-kmFR-wyn4-FzEt-NeG8j5', 'scsi-0QEMU_QEMU_HARDDISK_0c0aa11d-14fc-40a7-bbcb-a7c7d902b836', 'scsi-SQEMU_QEMU_HARDDISK_0c0aa11d-14fc-40a7-bbcb-a7c7d902b836'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0344b063--3cec--5ade--bfbf--9241287811af-osd--block--0344b063--3cec--5ade--bfbf--9241287811af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hRFZlh-qctn-bnB0-9ZhO-JNBh-muQM-00Vczq', 'scsi-0QEMU_QEMU_HARDDISK_6fe61b53-6367-46c0-9f1e-24f42cf64445', 'scsi-SQEMU_QEMU_HARDDISK_6fe61b53-6367-46c0-9f1e-24f42cf64445'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3485bbb9-dc34-4923-9640-15ed9830c3cd', 'scsi-SQEMU_QEMU_HARDDISK_3485bbb9-dc34-4923-9640-15ed9830c3cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-18-27-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226530 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.226547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5db078c0--6128--52c2--9305--54ff671eda75-osd--block--5db078c0--6128--52c2--9305--54ff671eda75', 'dm-uuid-LVM-RYQZnbBGY0TjyDJuc1CjDrS2jsaKjqQ2ZbxT5CuUvGXG4GryREvNmdF1Q0N8AJTE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--fda1a2ce--c0e6--5c69--aaa5--109883ddc076-osd--block--fda1a2ce--c0e6--5c69--aaa5--109883ddc076', 'dm-uuid-LVM-0TgdcD8Enf5FOTXaY0BoayYOBZYs3eXfEyLrrYuvDOobtP3Ih6O52eEwd9C6PIMF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:26:34.226705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87', 'scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5db078c0--6128--52c2--9305--54ff671eda75-osd--block--5db078c0--6128--52c2--9305--54ff671eda75'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1wqmIn-xPVd-MNFA-i9g8-7vPd-fqn2-5J6j0X', 'scsi-0QEMU_QEMU_HARDDISK_1e78336b-5c45-4f72-b22f-cac6621703c1', 'scsi-SQEMU_QEMU_HARDDISK_1e78336b-5c45-4f72-b22f-cac6621703c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fda1a2ce--c0e6--5c69--aaa5--109883ddc076-osd--block--fda1a2ce--c0e6--5c69--aaa5--109883ddc076'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bmLQ6T-MA5V-AVU9-lelo-200Y-U4YJ-1BfG3W', 'scsi-0QEMU_QEMU_HARDDISK_669b4378-b931-4094-a90b-e4d774be1d1d', 'scsi-SQEMU_QEMU_HARDDISK_669b4378-b931-4094-a90b-e4d774be1d1d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30074f97-ca08-4933-8c1f-7f138584444d', 'scsi-SQEMU_QEMU_HARDDISK_30074f97-ca08-4933-8c1f-7f138584444d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-18-27-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:26:34.226784 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.226795 | orchestrator | 2025-05-28 19:26:34.226807 | orchestrator | TASK [ceph-facts : get ceph current 
status] ************************************ 2025-05-28 19:26:34.226818 | orchestrator | Wednesday 28 May 2025 19:13:34 +0000 (0:00:01.983) 0:00:43.251 ********* 2025-05-28 19:26:34.226829 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.226840 | orchestrator | 2025-05-28 19:26:34.226852 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-28 19:26:34.226863 | orchestrator | Wednesday 28 May 2025 19:13:35 +0000 (0:00:00.322) 0:00:43.573 ********* 2025-05-28 19:26:34.226874 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.226885 | orchestrator | 2025-05-28 19:26:34.226896 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-28 19:26:34.226907 | orchestrator | Wednesday 28 May 2025 19:13:35 +0000 (0:00:00.185) 0:00:43.759 ********* 2025-05-28 19:26:34.226918 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.226929 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.226940 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.226951 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.226961 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.226989 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.227001 | orchestrator | 2025-05-28 19:26:34.227013 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-28 19:26:34.227024 | orchestrator | Wednesday 28 May 2025 19:13:36 +0000 (0:00:01.159) 0:00:44.919 ********* 2025-05-28 19:26:34.227035 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.227046 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.227056 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.227074 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.227085 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.227096 | orchestrator | ok: [testbed-node-5] 2025-05-28 
19:26:34.227107 | orchestrator |
2025-05-28 19:26:34.227118 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-05-28 19:26:34.227129 | orchestrator | Wednesday 28 May 2025 19:13:38 +0000 (0:00:01.918) 0:00:46.838 *********
2025-05-28 19:26:34.227140 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.227151 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.227161 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.227172 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.227183 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.227194 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.227227 | orchestrator |
2025-05-28 19:26:34.227244 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-28 19:26:34.227255 | orchestrator | Wednesday 28 May 2025 19:13:39 +0000 (0:00:01.110) 0:00:47.949 *********
2025-05-28 19:26:34.227271 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.227282 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.227293 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.227304 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.227315 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.227326 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.227337 | orchestrator |
2025-05-28 19:26:34.227348 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-28 19:26:34.227372 | orchestrator | Wednesday 28 May 2025 19:13:40 +0000 (0:00:01.096) 0:00:49.045 *********
2025-05-28 19:26:34.227383 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.227394 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.227405 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.227421 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.227432 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.227443 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.227454 | orchestrator |
2025-05-28 19:26:34.227465 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-28 19:26:34.227476 | orchestrator | Wednesday 28 May 2025 19:13:41 +0000 (0:00:00.719) 0:00:49.764 *********
2025-05-28 19:26:34.227486 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.227505 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.227515 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.227526 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.227537 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.227548 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.227559 | orchestrator |
2025-05-28 19:26:34.227570 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-28 19:26:34.227581 | orchestrator | Wednesday 28 May 2025 19:13:42 +0000 (0:00:01.451) 0:00:51.215 *********
2025-05-28 19:26:34.227592 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.227603 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.227614 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.227624 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.227635 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.227646 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.227657 | orchestrator |
2025-05-28 19:26:34.227668 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-05-28 19:26:34.227749 | orchestrator | Wednesday 28 May 2025 19:13:44 +0000 (0:00:01.217) 0:00:52.433 *********
2025-05-28 19:26:34.227771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 19:26:34.227806 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-28 19:26:34.227818 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-28 19:26:34.227829 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-28 19:26:34.227840 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-28 19:26:34.227851 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-28 19:26:34.227862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 19:26:34.227889 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.227901 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-28 19:26:34.227912 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-28 19:26:34.227923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 19:26:34.227934 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.227945 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 19:26:34.227956 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-28 19:26:34.227967 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.227997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 19:26:34.228009 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.228020 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 19:26:34.228036 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 19:26:34.228048 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 19:26:34.228066 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.228077 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 19:26:34.228088 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 19:26:34.228099 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.228118 | orchestrator |
2025-05-28 19:26:34.228129 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-05-28 19:26:34.228140 | orchestrator | Wednesday 28 May 2025 19:13:47 +0000 (0:00:03.414) 0:00:55.847 *********
2025-05-28 19:26:34.228151 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 19:26:34.228162 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-28 19:26:34.228173 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-28 19:26:34.228184 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-28 19:26:34.228194 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-28 19:26:34.228205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-28 19:26:34.228216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 19:26:34.228227 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-28 19:26:34.228245 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.228260 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.228271 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 19:26:34.228282 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-28 19:26:34.228293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 19:26:34.228305 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 19:26:34.228316 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 19:26:34.228327 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-28 19:26:34.228338 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.228349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 19:26:34.228360 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.228371 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 19:26:34.228382 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.228393 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 19:26:34.228404 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 19:26:34.228415 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.228426 | orchestrator |
2025-05-28 19:26:34.228437 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-28 19:26:34.228448 | orchestrator | Wednesday 28 May 2025 19:13:51 +0000 (0:00:03.925) 0:00:59.772 *********
2025-05-28 19:26:34.228459 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 19:26:34.228470 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-05-28 19:26:34.228481 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-05-28 19:26:34.228492 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-05-28 19:26:34.228503 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-28 19:26:34.228514 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 19:26:34.228525 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 19:26:34.228536 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-05-28 19:26:34.228547 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-05-28 19:26:34.228563 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 19:26:34.228575 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-28 19:26:34.228585 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 19:26:34.228596 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-05-28 19:26:34.228607 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 19:26:34.228618 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 19:26:34.228629 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 19:26:34.228646 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 19:26:34.228663 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 19:26:34.228674 | orchestrator |
2025-05-28 19:26:34.228686 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-28 19:26:34.228704 | orchestrator | Wednesday 28 May 2025 19:13:58 +0000 (0:00:07.018) 0:01:06.791 *********
2025-05-28 19:26:34.228716 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 19:26:34.228733 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-28 19:26:34.228744 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-28 19:26:34.228755 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-28 19:26:34.228766 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-28 19:26:34.228777 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-28 19:26:34.228788 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.228799 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-28 19:26:34.228810 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-28 19:26:34.228821 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.228839 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-28 19:26:34.228850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 19:26:34.228861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 19:26:34.228872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 19:26:34.228883 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.228898 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 19:26:34.228909 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 19:26:34.228921 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.228932 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 19:26:34.228943 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.228954 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 19:26:34.229023 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 19:26:34.229036 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 19:26:34.229048 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.229059 | orchestrator |
2025-05-28 19:26:34.229070 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-28 19:26:34.229081 | orchestrator | Wednesday 28 May 2025 19:14:00 +0000 (0:00:01.906) 0:01:08.698 *********
2025-05-28 19:26:34.229092 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 19:26:34.229103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-28 19:26:34.229114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-28 19:26:34.229126 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.229137 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-28 19:26:34.229148 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-28 19:26:34.229159 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-28 19:26:34.229170 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-28 19:26:34.229181 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-28 19:26:34.229192 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-28 19:26:34.229203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 19:26:34.229214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 19:26:34.229225 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.229236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 19:26:34.229247 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 19:26:34.229258 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 19:26:34.229276 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 19:26:34.229287 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.229299 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.229310 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.229321 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 19:26:34.229331 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 19:26:34.229343 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 19:26:34.229354 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.229364 | orchestrator |
2025-05-28 19:26:34.229374 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-05-28 19:26:34.229384 | orchestrator | Wednesday 28 May 2025 19:14:01 +0000 (0:00:01.341) 0:01:10.039 *********
2025-05-28 19:26:34.229394 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-28 19:26:34.229404 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-28 19:26:34.229414 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-28 19:26:34.229424 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-28 19:26:34.229434 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-28 19:26:34.229444 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-28 19:26:34.229454 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-28 19:26:34.229464 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-28 19:26:34.229474 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-28 19:26:34.229489 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-28 19:26:34.229500 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-28 19:26:34.229510 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-28 19:26:34.229520 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-28 19:26:34.229530 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-28 19:26:34.229539 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-28 19:26:34.229549 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.229559 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.229569 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-28 19:26:34.229579 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-28 19:26:34.229589 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-28 19:26:34.229599 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.229609 | orchestrator |
2025-05-28 19:26:34.229628 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-05-28 19:26:34.229639 | orchestrator | Wednesday 28 May 2025 19:14:03 +0000 (0:00:01.428) 0:01:11.468 *********
2025-05-28 19:26:34.229649 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.229659 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.229669 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.229679 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.229689 | orchestrator |
2025-05-28 19:26:34.229700 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-28 19:26:34.229716 | orchestrator | Wednesday 28 May 2025 19:14:04 +0000 (0:00:01.249) 0:01:12.718 *********
2025-05-28 19:26:34.229726 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.229736 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.229745 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.229755 | orchestrator |
2025-05-28 19:26:34.229765 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-28 19:26:34.229775 | orchestrator | Wednesday 28 May 2025 19:14:05 +0000 (0:00:00.703) 0:01:13.421 *********
2025-05-28 19:26:34.229785 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.229794 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.229805 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.229815 | orchestrator |
2025-05-28 19:26:34.229825 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-28 19:26:34.229834 | orchestrator | Wednesday 28 May 2025 19:14:05 +0000 (0:00:00.841) 0:01:14.263 *********
2025-05-28 19:26:34.229844 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.229854 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.229864 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.229874 | orchestrator |
2025-05-28 19:26:34.229884 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-28 19:26:34.229894 | orchestrator | Wednesday 28 May 2025 19:14:06 +0000 (0:00:00.732) 0:01:14.995 *********
2025-05-28 19:26:34.229904 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.229914 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.229924 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.229933 | orchestrator |
2025-05-28 19:26:34.229943 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-28 19:26:34.229953 | orchestrator | Wednesday 28 May 2025 19:14:07 +0000 (0:00:01.205) 0:01:16.201 *********
2025-05-28 19:26:34.229963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.230010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.230551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.230562 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.230570 | orchestrator |
2025-05-28 19:26:34.230578 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-28 19:26:34.230587 | orchestrator | Wednesday 28 May 2025 19:14:08 +0000 (0:00:01.049) 0:01:17.250 *********
2025-05-28 19:26:34.230595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.230603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.230611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.230619 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.230627 | orchestrator |
2025-05-28 19:26:34.230636 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-28 19:26:34.230644 | orchestrator | Wednesday 28 May 2025 19:14:10 +0000 (0:00:01.154) 0:01:18.404 *********
2025-05-28 19:26:34.230652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.230660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.230668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.230676 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.230684 | orchestrator |
2025-05-28 19:26:34.230692 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-28 19:26:34.230700 | orchestrator | Wednesday 28 May 2025 19:14:11 +0000 (0:00:01.790) 0:01:20.195 *********
2025-05-28 19:26:34.230708 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.230717 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.230725 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.230733 | orchestrator |
2025-05-28 19:26:34.230741 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-28 19:26:34.230764 | orchestrator | Wednesday 28 May 2025 19:14:12 +0000 (0:00:00.942) 0:01:21.137 *********
2025-05-28 19:26:34.230773 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-28 19:26:34.230781 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-28 19:26:34.230789 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-28 19:26:34.230797 | orchestrator |
2025-05-28 19:26:34.230805 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-28 19:26:34.230813 | orchestrator | Wednesday 28 May 2025 19:14:14 +0000 (0:00:01.731) 0:01:22.869 *********
2025-05-28 19:26:34.230821 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.230829 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.230837 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.230846 | orchestrator |
2025-05-28 19:26:34.230854 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-28 19:26:34.230862 | orchestrator | Wednesday 28 May 2025 19:14:15 +0000 (0:00:00.808) 0:01:23.678 *********
2025-05-28 19:26:34.230870 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.230878 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.230886 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.230894 | orchestrator |
2025-05-28 19:26:34.230902 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-28 19:26:34.230910 | orchestrator | Wednesday 28 May 2025 19:14:16 +0000 (0:00:01.085) 0:01:24.763 *********
2025-05-28 19:26:34.230918 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-28 19:26:34.230926 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.230939 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-28 19:26:34.230947 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.230955 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-28 19:26:34.230963 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.231014 | orchestrator |
2025-05-28 19:26:34.231023 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-28 19:26:34.231031 | orchestrator | Wednesday 28 May 2025 19:14:17 +0000 (0:00:00.961) 0:01:25.725 *********
2025-05-28 19:26:34.231040 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-28 19:26:34.231048 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.231056 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-28 19:26:34.231064 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.231072 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-28 19:26:34.231080 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.231088 | orchestrator |
2025-05-28 19:26:34.231096 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-28 19:26:34.231104 | orchestrator | Wednesday 28 May 2025 19:14:18 +0000 (0:00:01.139) 0:01:26.864 *********
2025-05-28 19:26:34.231112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.231120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.231128 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-28 19:26:34.231136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.231144 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.231152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-28 19:26:34.231160 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-28 19:26:34.231168 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-28 19:26:34.231176 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-28 19:26:34.231184 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.231193 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-28 19:26:34.231207 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.231216 | orchestrator |
2025-05-28 19:26:34.231225 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-28 19:26:34.231234 | orchestrator | Wednesday 28 May 2025 19:14:19 +0000 (0:00:01.136) 0:01:28.001 *********
2025-05-28 19:26:34.231243 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.231252 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.231260 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.231269 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.231278 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.231287 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.231296 | orchestrator |
2025-05-28 19:26:34.231305 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-28 19:26:34.231314 | orchestrator | Wednesday 28 May 2025 19:14:20 +0000 (0:00:00.888) 0:01:28.890 *********
2025-05-28 19:26:34.231323 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 19:26:34.231332 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-28 19:26:34.231341 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-28 19:26:34.231350 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-28 19:26:34.231358 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-28 19:26:34.231367 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-28 19:26:34.231376 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-28 19:26:34.231385 | orchestrator |
2025-05-28 19:26:34.231394 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-28 19:26:34.231403 | orchestrator | Wednesday 28 May 2025 19:14:21 +0000 (0:00:01.010) 0:01:29.901 *********
2025-05-28 19:26:34.231412 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 19:26:34.231427 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-28 19:26:34.231436 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-28 19:26:34.231445 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-28 19:26:34.231454 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-28 19:26:34.231464 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-28 19:26:34.231473 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-28 19:26:34.231481 | orchestrator |
2025-05-28 19:26:34.231490 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-28 19:26:34.231499 | orchestrator | Wednesday 28 May 2025 19:14:23 +0000 (0:00:02.034) 0:01:31.935 *********
2025-05-28 19:26:34.231509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.231518 | orchestrator |
2025-05-28 19:26:34.231527 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-28 19:26:34.231540 | orchestrator | Wednesday 28 May 2025 19:14:25 +0000 (0:00:01.702) 0:01:33.637 *********
2025-05-28 19:26:34.231550 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.231559 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.231568 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.231576 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.231584 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.231592 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.231600 | orchestrator |
2025-05-28 19:26:34.231608 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-28 19:26:34.231621 | orchestrator | Wednesday 28 May 2025 19:14:26 +0000 (0:00:01.142) 0:01:34.780 *********
2025-05-28 19:26:34.231629 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.231637 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.231645 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.231653 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.231661 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.231669 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.231677 | orchestrator |
2025-05-28 19:26:34.231685 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-28 19:26:34.231693 | orchestrator | Wednesday 28 May 2025 19:14:28 +0000 (0:00:01.655) 0:01:36.436 *********
2025-05-28 19:26:34.231702 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.231710 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.231718 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.231726 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.231734 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.231742 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.231750 | orchestrator |
2025-05-28 19:26:34.231758 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-28 19:26:34.231766 | orchestrator | Wednesday 28 May 2025 19:14:29 +0000 (0:00:01.343) 0:01:37.780 *********
2025-05-28 19:26:34.231774 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.231783 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.231791 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.231799 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.231807 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.231815 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.231823 | orchestrator |
2025-05-28 19:26:34.231831 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-28 19:26:34.231839 | orchestrator | Wednesday 28 May 2025 19:14:31 +0000 (0:00:01.653) 0:01:39.433 *********
2025-05-28 19:26:34.231847 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.231855 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.231863 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.231871 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.231879 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.231887 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.231895 | orchestrator |
2025-05-28 19:26:34.231903 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-28 19:26:34.231911 | orchestrator | Wednesday 28 May 2025 19:14:31 +0000 (0:00:00.827) 0:01:40.260 *********
2025-05-28 19:26:34.231919 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.231927 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.231935 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.231943 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.231951 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.231959 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.231967 | orchestrator |
2025-05-28 19:26:34.231986 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-28 19:26:34.231994 | orchestrator | Wednesday 28 May 2025 19:14:32 +0000 (0:00:01.050) 0:01:41.311 *********
2025-05-28 19:26:34.232002 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.232010 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.232018 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.232026 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.232034 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.232042 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.232050 | orchestrator |
2025-05-28 19:26:34.232058 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-28 19:26:34.232066 | orchestrator | Wednesday 28 May 2025 19:14:33 +0000 (0:00:00.665) 0:01:41.976 *********
2025-05-28 19:26:34.232075 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.232083 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.232095 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.232103 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.232111 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.232119 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.232127 | orchestrator |
2025-05-28 19:26:34.232135 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-28 19:26:34.232143 | orchestrator | Wednesday 28 May 2025 19:14:34 +0000 (0:00:00.908) 0:01:42.884 *********
2025-05-28 19:26:34.232155 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.232164 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.232172 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.232180 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.232188 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.232196 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.232204 | orchestrator |
2025-05-28 19:26:34.232212 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-28 19:26:34.232221 | orchestrator | Wednesday 28 May 2025 19:14:35 +0000 (0:00:00.716) 0:01:43.601 *********
2025-05-28 19:26:34.232229 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.232237 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.232245 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.232253 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.232261 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.232268 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.232276 | orchestrator |
2025-05-28 19:26:34.232285 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-28 19:26:34.232293 | orchestrator | Wednesday 28 May 2025 19:14:36 +0000 (0:00:01.043) 0:01:44.644 *********
2025-05-28 19:26:34.232301 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.232309 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.232317 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.232325 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.232333 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.232341 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.232349 | orchestrator |
2025-05-28 19:26:34.232364 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-28 19:26:34.232372 | orchestrator | Wednesday 28 May 2025 19:14:37 +0000 (0:00:01.503) 0:01:46.148 *********
2025-05-28 19:26:34.232380 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.232389 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.232397 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.232405 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.232413 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.232421 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.232429 | orchestrator |
2025-05-28 19:26:34.232437 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-28 19:26:34.232445 | orchestrator | Wednesday 28 May 2025 19:14:39 +0000 (0:00:01.223) 0:01:47.371 *********
2025-05-28 19:26:34.232453 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.232461 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.232469 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.232478 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.232486 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.232494 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.232502 | orchestrator |
2025-05-28 19:26:34.232510 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-28 19:26:34.232518 | orchestrator | Wednesday 28 May 2025 19:14:39 +0000 (0:00:00.814) 0:01:48.185 *********
2025-05-28 19:26:34.232527 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.232535 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.232543 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.232551 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.232559 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.232567 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.232580 | orchestrator |
2025-05-28 19:26:34.232588 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-28 19:26:34.232596 | orchestrator | Wednesday 28 May 2025 19:14:40 +0000 (0:00:00.846) 0:01:49.032 *********
2025-05-28 19:26:34.232604 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.232612 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.232620 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.232628 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.232636 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.232644
| orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.232652 | orchestrator | 2025-05-28 19:26:34.232661 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-28 19:26:34.232669 | orchestrator | Wednesday 28 May 2025 19:14:41 +0000 (0:00:00.692) 0:01:49.725 ********* 2025-05-28 19:26:34.232677 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.232685 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.232693 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.232701 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.232709 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.232717 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.232725 | orchestrator | 2025-05-28 19:26:34.232733 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-28 19:26:34.232741 | orchestrator | Wednesday 28 May 2025 19:14:42 +0000 (0:00:00.863) 0:01:50.589 ********* 2025-05-28 19:26:34.232749 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.232757 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.232765 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.232773 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.232781 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.232789 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.232797 | orchestrator | 2025-05-28 19:26:34.232805 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-28 19:26:34.232813 | orchestrator | Wednesday 28 May 2025 19:14:42 +0000 (0:00:00.619) 0:01:51.208 ********* 2025-05-28 19:26:34.232821 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.232829 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.232837 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.232845 | orchestrator | skipping: 
[testbed-node-3] 2025-05-28 19:26:34.232853 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.232861 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.232869 | orchestrator | 2025-05-28 19:26:34.232877 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-28 19:26:34.232885 | orchestrator | Wednesday 28 May 2025 19:14:43 +0000 (0:00:00.870) 0:01:52.078 ********* 2025-05-28 19:26:34.232894 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.232902 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.232910 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.232918 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.232926 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.232934 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.232942 | orchestrator | 2025-05-28 19:26:34.232950 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-28 19:26:34.232962 | orchestrator | Wednesday 28 May 2025 19:14:44 +0000 (0:00:00.666) 0:01:52.745 ********* 2025-05-28 19:26:34.233006 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.233016 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.233024 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.233032 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.233040 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.233048 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.233056 | orchestrator | 2025-05-28 19:26:34.233064 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-28 19:26:34.233072 | orchestrator | Wednesday 28 May 2025 19:14:45 +0000 (0:00:00.887) 0:01:53.632 ********* 2025-05-28 19:26:34.233080 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233093 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233101 | orchestrator 
| skipping: [testbed-node-2] 2025-05-28 19:26:34.233109 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233117 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233125 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233133 | orchestrator | 2025-05-28 19:26:34.233141 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-28 19:26:34.233149 | orchestrator | Wednesday 28 May 2025 19:14:45 +0000 (0:00:00.687) 0:01:54.320 ********* 2025-05-28 19:26:34.233157 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233165 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233173 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233181 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233193 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233201 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233209 | orchestrator | 2025-05-28 19:26:34.233217 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-28 19:26:34.233225 | orchestrator | Wednesday 28 May 2025 19:14:46 +0000 (0:00:00.991) 0:01:55.312 ********* 2025-05-28 19:26:34.233233 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233241 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233249 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233257 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233265 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233273 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233281 | orchestrator | 2025-05-28 19:26:34.233289 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-28 19:26:34.233297 | orchestrator | Wednesday 28 May 2025 19:14:47 +0000 (0:00:00.841) 0:01:56.153 ********* 2025-05-28 19:26:34.233305 | orchestrator | 
skipping: [testbed-node-0] 2025-05-28 19:26:34.233313 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233321 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233328 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233336 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233344 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233352 | orchestrator | 2025-05-28 19:26:34.233360 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-28 19:26:34.233368 | orchestrator | Wednesday 28 May 2025 19:14:48 +0000 (0:00:01.048) 0:01:57.202 ********* 2025-05-28 19:26:34.233376 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233384 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233392 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233400 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233406 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233413 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233420 | orchestrator | 2025-05-28 19:26:34.233427 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-28 19:26:34.233433 | orchestrator | Wednesday 28 May 2025 19:14:49 +0000 (0:00:00.921) 0:01:58.123 ********* 2025-05-28 19:26:34.233440 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233447 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233454 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233461 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233467 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233474 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233481 | orchestrator | 2025-05-28 19:26:34.233488 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-28 
19:26:34.233494 | orchestrator | Wednesday 28 May 2025 19:14:51 +0000 (0:00:01.229) 0:01:59.353 ********* 2025-05-28 19:26:34.233501 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233508 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233515 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233525 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233532 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233538 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233545 | orchestrator | 2025-05-28 19:26:34.233552 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-28 19:26:34.233559 | orchestrator | Wednesday 28 May 2025 19:14:51 +0000 (0:00:00.765) 0:02:00.119 ********* 2025-05-28 19:26:34.233566 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233573 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233579 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233586 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233593 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233599 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233606 | orchestrator | 2025-05-28 19:26:34.233613 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-28 19:26:34.233620 | orchestrator | Wednesday 28 May 2025 19:14:52 +0000 (0:00:01.112) 0:02:01.231 ********* 2025-05-28 19:26:34.233627 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233633 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233640 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233647 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233653 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233660 | orchestrator | skipping: 
[testbed-node-5] 2025-05-28 19:26:34.233666 | orchestrator | 2025-05-28 19:26:34.233673 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-28 19:26:34.233680 | orchestrator | Wednesday 28 May 2025 19:14:53 +0000 (0:00:00.716) 0:02:01.948 ********* 2025-05-28 19:26:34.233687 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233694 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233705 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233712 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233719 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233725 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233732 | orchestrator | 2025-05-28 19:26:34.233739 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-28 19:26:34.233746 | orchestrator | Wednesday 28 May 2025 19:14:54 +0000 (0:00:01.025) 0:02:02.973 ********* 2025-05-28 19:26:34.233753 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233759 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233766 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233773 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233780 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233786 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233793 | orchestrator | 2025-05-28 19:26:34.233800 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-28 19:26:34.233807 | orchestrator | Wednesday 28 May 2025 19:14:55 +0000 (0:00:00.788) 0:02:03.762 ********* 2025-05-28 19:26:34.233814 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233820 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233827 | orchestrator | skipping: [testbed-node-2] 2025-05-28 
19:26:34.233834 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233840 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233847 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.233854 | orchestrator | 2025-05-28 19:26:34.233864 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-28 19:26:34.233871 | orchestrator | Wednesday 28 May 2025 19:14:56 +0000 (0:00:00.994) 0:02:04.756 ********* 2025-05-28 19:26:34.233878 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-28 19:26:34.233884 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-28 19:26:34.233891 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.233898 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-28 19:26:34.233908 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-28 19:26:34.233915 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.233922 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-28 19:26:34.233929 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-28 19:26:34.233935 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.233942 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-28 19:26:34.233949 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-28 19:26:34.233955 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.233962 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-28 19:26:34.233969 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-28 19:26:34.233987 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.233994 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-28 19:26:34.234000 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-28 19:26:34.234007 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234014 | orchestrator | 2025-05-28 19:26:34.234049 | 
orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-28 19:26:34.234056 | orchestrator | Wednesday 28 May 2025 19:14:57 +0000 (0:00:00.918) 0:02:05.675 ********* 2025-05-28 19:26:34.234063 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-28 19:26:34.234070 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-28 19:26:34.234077 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234084 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-28 19:26:34.234091 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-28 19:26:34.234097 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-28 19:26:34.234104 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-28 19:26:34.234111 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234118 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-28 19:26:34.234125 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-28 19:26:34.234131 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.234138 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-28 19:26:34.234145 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-28 19:26:34.234152 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.234158 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.234165 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-28 19:26:34.234172 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-28 19:26:34.234178 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234185 | orchestrator | 2025-05-28 19:26:34.234192 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] 
******************************* 2025-05-28 19:26:34.234199 | orchestrator | Wednesday 28 May 2025 19:14:58 +0000 (0:00:01.071) 0:02:06.746 ********* 2025-05-28 19:26:34.234206 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234212 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234219 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.234226 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.234233 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.234239 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234246 | orchestrator | 2025-05-28 19:26:34.234253 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-28 19:26:34.234260 | orchestrator | Wednesday 28 May 2025 19:14:59 +0000 (0:00:00.791) 0:02:07.538 ********* 2025-05-28 19:26:34.234267 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234273 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234280 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.234287 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.234293 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.234304 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234311 | orchestrator | 2025-05-28 19:26:34.234318 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-28 19:26:34.234333 | orchestrator | Wednesday 28 May 2025 19:15:00 +0000 (0:00:00.855) 0:02:08.394 ********* 2025-05-28 19:26:34.234346 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234353 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234360 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.234366 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.234373 | orchestrator | skipping: [testbed-node-4] 2025-05-28 
19:26:34.234380 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234386 | orchestrator | 2025-05-28 19:26:34.234393 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-28 19:26:34.234400 | orchestrator | Wednesday 28 May 2025 19:15:00 +0000 (0:00:00.716) 0:02:09.110 ********* 2025-05-28 19:26:34.234407 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234414 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234420 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.234427 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.234434 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.234440 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234447 | orchestrator | 2025-05-28 19:26:34.234454 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-28 19:26:34.234461 | orchestrator | Wednesday 28 May 2025 19:15:01 +0000 (0:00:00.869) 0:02:09.980 ********* 2025-05-28 19:26:34.234468 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234474 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234481 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.234488 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.234497 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.234504 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234511 | orchestrator | 2025-05-28 19:26:34.234518 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-28 19:26:34.234525 | orchestrator | Wednesday 28 May 2025 19:15:02 +0000 (0:00:00.657) 0:02:10.638 ********* 2025-05-28 19:26:34.234532 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234538 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234545 | orchestrator | skipping: [testbed-node-2] 2025-05-28 
19:26:34.234552 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.234558 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.234565 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234572 | orchestrator | 2025-05-28 19:26:34.234578 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-28 19:26:34.234585 | orchestrator | Wednesday 28 May 2025 19:15:03 +0000 (0:00:00.811) 0:02:11.449 ********* 2025-05-28 19:26:34.234592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.234599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 19:26:34.234606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.234612 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234619 | orchestrator | 2025-05-28 19:26:34.234626 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-28 19:26:34.234633 | orchestrator | Wednesday 28 May 2025 19:15:03 +0000 (0:00:00.420) 0:02:11.870 ********* 2025-05-28 19:26:34.234639 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.234646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 19:26:34.234653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.234660 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234666 | orchestrator | 2025-05-28 19:26:34.234673 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-28 19:26:34.234680 | orchestrator | Wednesday 28 May 2025 19:15:03 +0000 (0:00:00.430) 0:02:12.301 ********* 2025-05-28 19:26:34.234691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.234698 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 
19:26:34.234705 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.234711 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234718 | orchestrator | 2025-05-28 19:26:34.234725 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-28 19:26:34.234732 | orchestrator | Wednesday 28 May 2025 19:15:04 +0000 (0:00:00.414) 0:02:12.716 ********* 2025-05-28 19:26:34.234739 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234745 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234752 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.234759 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.234765 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.234772 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234779 | orchestrator | 2025-05-28 19:26:34.234785 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-28 19:26:34.234792 | orchestrator | Wednesday 28 May 2025 19:15:05 +0000 (0:00:00.647) 0:02:13.363 ********* 2025-05-28 19:26:34.234799 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-28 19:26:34.234806 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234813 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-28 19:26:34.234819 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234826 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-28 19:26:34.234833 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.234840 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-28 19:26:34.234847 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.234853 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-28 19:26:34.234860 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.234867 | orchestrator | skipping: 
[testbed-node-5] => (item=0)  2025-05-28 19:26:34.234874 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234880 | orchestrator | 2025-05-28 19:26:34.234887 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-28 19:26:34.234894 | orchestrator | Wednesday 28 May 2025 19:15:06 +0000 (0:00:01.062) 0:02:14.426 ********* 2025-05-28 19:26:34.234901 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234907 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234914 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.234921 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.234928 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.234934 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.234941 | orchestrator | 2025-05-28 19:26:34.234952 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-28 19:26:34.234959 | orchestrator | Wednesday 28 May 2025 19:15:06 +0000 (0:00:00.662) 0:02:15.088 ********* 2025-05-28 19:26:34.234966 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.234982 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.234989 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.234995 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.235002 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.235009 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.235016 | orchestrator | 2025-05-28 19:26:34.235022 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-28 19:26:34.235029 | orchestrator | Wednesday 28 May 2025 19:15:07 +0000 (0:00:00.830) 0:02:15.919 ********* 2025-05-28 19:26:34.235036 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-28 19:26:34.235043 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.235049 | 
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=0)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances_host] ********************************
Wednesday 28 May 2025 19:15:08 +0000 (0:00:00.809) 0:02:16.729 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances_all] *********************************
Wednesday 28 May 2025 19:15:09 +0000 (0:00:00.888) 0:02:17.617 *********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=testbed-node-3)
skipping: [testbed-node-1] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-5)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-3)
skipping: [testbed-node-5] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-5)
skipping: [testbed-node-5]

TASK [ceph-config : generate ceph.conf configuration file] *********************
Wednesday 28 May 2025 19:15:10 +0000 (0:00:01.684) 0:02:19.301 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : create rgw keyrings] ******************************************
Wednesday 28 May 2025 19:15:12 +0000 (0:00:01.278) 0:02:20.580 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=None)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=None)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=None)
skipping: [testbed-node-5]

TASK [ceph-rgw : include_tasks multisite] **************************************
Wednesday 28 May 2025 19:15:13 +0000 (0:00:01.316) 0:02:21.897 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
Wednesday 28 May 2025 19:15:14 +0000 (0:00:01.329) 0:02:23.226 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : generate systemd ceph-mon target file] ***********
Wednesday 28 May 2025 19:15:16 +0000 (0:00:01.643) 0:02:24.870 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
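The per-task timing lines in this stream (e.g. `Wednesday 28 May 2025 19:15:08 +0000 (0:00:00.809) 0:02:16.729`) come from Ansible's `profile_tasks` callback: the parenthesized figure is the duration of the just-finished task and the trailing figure is cumulative play time. A minimal sketch for pulling both values out of such a line when skimming long logs (the helper names are ours, not part of the job tooling):

```python
import re

# Matches the profile_tasks suffix: "(H:MM:SS.mmm) H:MM:SS.mmm"
TIMING = re.compile(r"\((\d+):(\d+):(\d+\.\d+)\)\s+(\d+):(\d+):(\d+\.\d+)")

def to_seconds(h, m, s):
    """Convert split hour/minute/second strings to float seconds."""
    return int(h) * 3600 + int(m) * 60 + float(s)

def parse_timing(line):
    """Return (task_duration_s, elapsed_s) from a profile_tasks line, or None."""
    m = TIMING.search(line)
    if not m:
        return None
    return to_seconds(*m.groups()[:3]), to_seconds(*m.groups()[3:])

line = "Wednesday 28 May 2025 19:15:08 +0000 (0:00:00.809) 0:02:16.729 *********"
print(parse_timing(line))  # duration ~0.809 s, elapsed ~136.7 s
```

Sorting tasks by the first value is a quick way to spot the slow steps, such as the 46-second image pull further down.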
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-container-common : enable ceph.target] ******************************
Wednesday 28 May 2025 19:15:18 +0000 (0:00:01.765) 0:02:26.635 *********
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-4]

TASK [ceph-container-common : include prerequisites.yml] ***********************
Wednesday 28 May 2025 19:15:20 +0000 (0:00:02.655) 0:02:29.291 *********
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-container-common : stop lvmetad] ************************************
Wednesday 28 May 2025 19:15:22 +0000 (0:00:01.260) 0:02:30.552 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : disable and mask lvmetad service] ****************
Wednesday 28 May 2025 19:15:23 +0000 (0:00:00.884) 0:02:31.437 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : remove ceph udev rules] **************************
Wednesday 28 May 2025 19:15:23 +0000 (0:00:00.644) 0:02:32.081 *********
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : ensure tmpfiles.d is present] ********************
Wednesday 28 May 2025 19:15:25 +0000 (0:00:01.545) 0:02:33.627 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-container-common : restore certificates selinux context] ************
Wednesday 28 May 2025 19:15:26 +0000 (0:00:00.942) 0:02:34.569 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : include registry.yml] ****************************
Wednesday 28 May 2025 19:15:27 +0000 (0:00:00.932) 0:02:35.502 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : include fetch_image.yml] *************************
Wednesday 28 May 2025 19:15:27 +0000 (0:00:00.778) 0:02:36.281 *********
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] ***
Wednesday 28 May 2025 19:15:29 +0000 (0:00:01.819) 0:02:38.100 *********
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]
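Each result record in this stream has the shape `STATUS: [host]`, optionally followed by `=> (item=…)` for loop iterations. With six nodes per play, a small tally over those records makes it easier to see which hosts actually changed versus merely skipped; a rough sketch (our own helper, not part of the job tooling):

```python
import re
from collections import Counter

# "ok: [testbed-node-0]", "skipping: [testbed-node-1] => (item=...)" etc.
RESULT = re.compile(r"^(ok|changed|skipping|failed): \[([^\]]+)\]")

def tally(lines):
    """Count (status, host) pairs; per-item lines count toward the same host."""
    counts = Counter()
    for line in lines:
        m = RESULT.match(line.strip())
        if m:
            counts[(m.group(1), m.group(2))] += 1
    return counts

log = [
    "ok: [testbed-node-4]",
    "changed: [testbed-node-0]",
    "skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)",
    "skipping: [testbed-node-1]",
]
print(tally(log))
```

`included:` lines are intentionally excluded, since they list hosts after `for` rather than in brackets.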
TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] ***
Wednesday 28 May 2025 19:16:16 +0000 (0:00:46.694) 0:03:24.794 *********
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-5]

TASK [ceph-container-common : pulling node-exporter container image] ***********
Wednesday 28 May 2025 19:16:17 +0000 (0:00:01.108) 0:03:25.902 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : export local ceph dev image] *********************
Wednesday 28 May 2025 19:16:18 +0000 (0:00:00.646) 0:03:26.549 *********
skipping: [testbed-node-0]

TASK [ceph-container-common : copy ceph dev image file] ************************
Wednesday 28 May 2025 19:16:18 +0000 (0:00:00.193) 0:03:26.743 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : load ceph dev image] *****************************
Wednesday 28 May 2025 19:16:19 +0000 (0:00:00.980) 0:03:27.723 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : remove tmp ceph dev image file] ******************
Wednesday 28 May 2025 19:16:20 +0000 (0:00:00.634) 0:03:28.358 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : get ceph version] ********************************
Wednesday 28 May 2025 19:16:21 +0000 (0:00:00.979) 0:03:29.337 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] ***
Wednesday 28 May 2025 19:16:22 +0000 (0:00:01.902) 0:03:31.240 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-container-common : include release.yml] *****************************
Wednesday 28 May 2025 19:16:23 +0000 (0:00:00.692) 0:03:31.933 *********
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-container-common : set_fact ceph_release jewel] *********************
Wednesday 28 May 2025 19:16:24 +0000 (0:00:01.336) 0:03:33.269 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release kraken] ********************
Wednesday 28 May 2025 19:16:25 +0000 (0:00:00.990) 0:03:34.259 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release luminous] ******************
Wednesday 28 May 2025 19:16:26 +0000 (0:00:00.782) 0:03:35.041 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release mimic] *********************
Wednesday 28 May 2025 19:16:27 +0000 (0:00:01.053) 0:03:36.094 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release nautilus] ******************
Wednesday 28 May 2025 19:16:28 +0000 (0:00:00.726) 0:03:36.821 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release octopus] *******************
Wednesday 28 May 2025 19:16:29 +0000 (0:00:01.263) 0:03:38.084 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release pacific] *******************
Wednesday 28 May 2025 19:16:30 +0000 (0:00:00.843) 0:03:38.928 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release quincy] ********************
Wednesday 28 May 2025 19:16:31 +0000 (0:00:01.011) 0:03:39.939 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
Wednesday 28 May 2025 19:16:33 +0000 (0:00:01.545) 0:03:41.484 *********
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : create ceph initial directories] ***************************
Wednesday 28 May 2025 19:16:34 +0000 (0:00:01.282) 0:03:42.767 *********
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-0] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-1] => (item=/var/run/ceph)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/run/ceph)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-0] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/log/ceph)
changed: [testbed-node-5] => (item=/var/run/ceph)
changed: [testbed-node-3] => (item=/var/log/ceph)
changed: [testbed-node-4] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/log/ceph)
changed: [testbed-node-5] => (item=/var/log/ceph)
changed: [testbed-node-4] => (item=/var/log/ceph)

TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
Wednesday 28 May 2025 19:16:40 +0000 (0:00:06.017) 0:03:48.784 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : create rados gateway instance directories] *****************
Wednesday 28 May 2025 19:16:41 +0000 (0:00:01.319) 0:03:50.104 *********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : generate environment file] *********************************
Wednesday 28 May 2025 19:16:42 +0000 (0:00:01.231) 0:03:51.335 *********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : reset num_osds] ********************************************
orchestrator | Wednesday 28 May 2025 19:16:44 +0000 (0:00:01.407) 0:03:52.742 ********* 2025-05-28 19:26:34.238345 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238352 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.238358 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238364 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.238371 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.238377 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.238383 | orchestrator | 2025-05-28 19:26:34.238389 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-28 19:26:34.238396 | orchestrator | Wednesday 28 May 2025 19:16:45 +0000 (0:00:00.997) 0:03:53.739 ********* 2025-05-28 19:26:34.238402 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238407 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.238413 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238418 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.238423 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.238429 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.238434 | orchestrator | 2025-05-28 19:26:34.238440 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-28 19:26:34.238445 | orchestrator | Wednesday 28 May 2025 19:16:46 +0000 (0:00:00.706) 0:03:54.446 ********* 2025-05-28 19:26:34.238451 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238456 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.238462 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238467 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.238472 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.238478 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.238483 | orchestrator | 2025-05-28 19:26:34.238489 | orchestrator | TASK 
[ceph-config : set_fact rejected_devices] ********************************* 2025-05-28 19:26:34.238494 | orchestrator | Wednesday 28 May 2025 19:16:46 +0000 (0:00:00.878) 0:03:55.324 ********* 2025-05-28 19:26:34.238500 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238509 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.238514 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238520 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.238525 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.238531 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.238536 | orchestrator | 2025-05-28 19:26:34.238542 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-28 19:26:34.238547 | orchestrator | Wednesday 28 May 2025 19:16:47 +0000 (0:00:00.683) 0:03:56.008 ********* 2025-05-28 19:26:34.238553 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238558 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.238564 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238569 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.238574 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.238580 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.238585 | orchestrator | 2025-05-28 19:26:34.238591 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-28 19:26:34.238596 | orchestrator | Wednesday 28 May 2025 19:16:48 +0000 (0:00:00.851) 0:03:56.859 ********* 2025-05-28 19:26:34.238602 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238607 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.238613 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238618 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.238623 | orchestrator | skipping: [testbed-node-4] 
2025-05-28 19:26:34.238629 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.238634 | orchestrator | 2025-05-28 19:26:34.238640 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-28 19:26:34.238680 | orchestrator | Wednesday 28 May 2025 19:16:49 +0000 (0:00:00.664) 0:03:57.524 ********* 2025-05-28 19:26:34.238688 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238693 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.238702 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238708 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.238713 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.238719 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.238725 | orchestrator | 2025-05-28 19:26:34.238731 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-28 19:26:34.238737 | orchestrator | Wednesday 28 May 2025 19:16:50 +0000 (0:00:00.871) 0:03:58.395 ********* 2025-05-28 19:26:34.238743 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238749 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.238754 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238760 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.238766 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.238772 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.238778 | orchestrator | 2025-05-28 19:26:34.238784 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-28 19:26:34.238790 | orchestrator | Wednesday 28 May 2025 19:16:50 +0000 (0:00:00.751) 0:03:59.147 ********* 2025-05-28 19:26:34.238796 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238801 | orchestrator | skipping: [testbed-node-1] 
2025-05-28 19:26:34.238810 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238816 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.238821 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.238827 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.238833 | orchestrator | 2025-05-28 19:26:34.238838 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-28 19:26:34.238844 | orchestrator | Wednesday 28 May 2025 19:16:53 +0000 (0:00:02.460) 0:04:01.607 ********* 2025-05-28 19:26:34.238849 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238855 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.238860 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238870 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.238876 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.238881 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.238887 | orchestrator | 2025-05-28 19:26:34.238892 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-28 19:26:34.238898 | orchestrator | Wednesday 28 May 2025 19:16:53 +0000 (0:00:00.720) 0:04:02.328 ********* 2025-05-28 19:26:34.238903 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-28 19:26:34.238909 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-28 19:26:34.238914 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.238920 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-28 19:26:34.238925 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-28 19:26:34.238931 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.238936 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-28 19:26:34.238942 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-28 19:26:34.238947 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.238953 | 
orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-28 19:26:34.238958 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-28 19:26:34.238964 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.238980 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-28 19:26:34.238986 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-28 19:26:34.238991 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.238997 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-28 19:26:34.239002 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-28 19:26:34.239008 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.239013 | orchestrator | 2025-05-28 19:26:34.239019 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-28 19:26:34.239024 | orchestrator | Wednesday 28 May 2025 19:16:55 +0000 (0:00:01.040) 0:04:03.369 ********* 2025-05-28 19:26:34.239030 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-28 19:26:34.239035 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-28 19:26:34.239041 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239046 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-28 19:26:34.239051 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-28 19:26:34.239057 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239063 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-28 19:26:34.239068 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-28 19:26:34.239073 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239079 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-05-28 19:26:34.239085 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-28 19:26:34.239090 | 
orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-28 19:26:34.239096 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-28 19:26:34.239101 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-28 19:26:34.239107 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-28 19:26:34.239112 | orchestrator | 2025-05-28 19:26:34.239118 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-28 19:26:34.239124 | orchestrator | Wednesday 28 May 2025 19:16:55 +0000 (0:00:00.809) 0:04:04.178 ********* 2025-05-28 19:26:34.239129 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239135 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239140 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239146 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.239151 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.239157 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.239162 | orchestrator | 2025-05-28 19:26:34.239168 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-28 19:26:34.239179 | orchestrator | Wednesday 28 May 2025 19:16:56 +0000 (0:00:01.040) 0:04:05.219 ********* 2025-05-28 19:26:34.239185 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239228 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239235 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239241 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.239246 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.239255 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.239261 | orchestrator | 2025-05-28 19:26:34.239266 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-28 
19:26:34.239272 | orchestrator | Wednesday 28 May 2025 19:16:57 +0000 (0:00:00.648) 0:04:05.868 ********* 2025-05-28 19:26:34.239277 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239283 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239288 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239294 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.239299 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.239305 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.239310 | orchestrator | 2025-05-28 19:26:34.239316 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-28 19:26:34.239321 | orchestrator | Wednesday 28 May 2025 19:16:58 +0000 (0:00:00.994) 0:04:06.862 ********* 2025-05-28 19:26:34.239327 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239332 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239338 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239343 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.239349 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.239357 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.239363 | orchestrator | 2025-05-28 19:26:34.239368 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-28 19:26:34.239374 | orchestrator | Wednesday 28 May 2025 19:16:59 +0000 (0:00:00.780) 0:04:07.642 ********* 2025-05-28 19:26:34.239379 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239385 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239390 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239396 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.239401 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.239407 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.239412 | orchestrator | 
2025-05-28 19:26:34.239418 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-28 19:26:34.239423 | orchestrator | Wednesday 28 May 2025 19:17:00 +0000 (0:00:01.066) 0:04:08.709 ********* 2025-05-28 19:26:34.239429 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239434 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239440 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239445 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.239451 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.239456 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.239462 | orchestrator | 2025-05-28 19:26:34.239467 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-28 19:26:34.239473 | orchestrator | Wednesday 28 May 2025 19:17:01 +0000 (0:00:00.923) 0:04:09.632 ********* 2025-05-28 19:26:34.239478 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.239484 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 19:26:34.239489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.239495 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239500 | orchestrator | 2025-05-28 19:26:34.239506 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-28 19:26:34.239511 | orchestrator | Wednesday 28 May 2025 19:17:02 +0000 (0:00:00.732) 0:04:10.364 ********* 2025-05-28 19:26:34.239517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.239526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 19:26:34.239532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.239538 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239543 | orchestrator | 
2025-05-28 19:26:34.239549 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-28 19:26:34.239554 | orchestrator | Wednesday 28 May 2025 19:17:02 +0000 (0:00:00.937) 0:04:11.302 ********* 2025-05-28 19:26:34.239560 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.239565 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 19:26:34.239571 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.239576 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239582 | orchestrator | 2025-05-28 19:26:34.239587 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-28 19:26:34.239593 | orchestrator | Wednesday 28 May 2025 19:17:03 +0000 (0:00:00.473) 0:04:11.776 ********* 2025-05-28 19:26:34.239598 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239604 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239609 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239615 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.239621 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.239626 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.239632 | orchestrator | 2025-05-28 19:26:34.239637 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-28 19:26:34.239643 | orchestrator | Wednesday 28 May 2025 19:17:04 +0000 (0:00:00.753) 0:04:12.529 ********* 2025-05-28 19:26:34.239648 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-28 19:26:34.239654 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239659 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-28 19:26:34.239665 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239670 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-28 
19:26:34.239676 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239681 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-28 19:26:34.239687 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-28 19:26:34.239693 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-28 19:26:34.239698 | orchestrator | 2025-05-28 19:26:34.239704 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-28 19:26:34.239709 | orchestrator | Wednesday 28 May 2025 19:17:05 +0000 (0:00:01.568) 0:04:14.098 ********* 2025-05-28 19:26:34.239715 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239754 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239762 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239768 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.239777 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.239782 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.239788 | orchestrator | 2025-05-28 19:26:34.239793 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-28 19:26:34.239799 | orchestrator | Wednesday 28 May 2025 19:17:06 +0000 (0:00:00.699) 0:04:14.797 ********* 2025-05-28 19:26:34.239804 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239810 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239815 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239821 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.239826 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.239832 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.239837 | orchestrator | 2025-05-28 19:26:34.239843 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-28 19:26:34.239849 | orchestrator | Wednesday 28 May 2025 19:17:07 +0000 (0:00:01.048) 0:04:15.846 
********* 2025-05-28 19:26:34.239854 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-28 19:26:34.239863 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-28 19:26:34.239869 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239874 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-28 19:26:34.239880 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239888 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-28 19:26:34.239894 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239899 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.239905 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-28 19:26:34.239910 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.239916 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-28 19:26:34.239921 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.239927 | orchestrator | 2025-05-28 19:26:34.239932 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-28 19:26:34.239938 | orchestrator | Wednesday 28 May 2025 19:17:08 +0000 (0:00:00.950) 0:04:16.796 ********* 2025-05-28 19:26:34.239943 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.239949 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.239954 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.239960 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-28 19:26:34.239965 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.239998 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-28 19:26:34.240004 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.240010 | orchestrator | skipping: 
[testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-28 19:26:34.240016 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.240021 | orchestrator | 2025-05-28 19:26:34.240027 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-28 19:26:34.240032 | orchestrator | Wednesday 28 May 2025 19:17:09 +0000 (0:00:00.949) 0:04:17.746 ********* 2025-05-28 19:26:34.240038 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.240043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 19:26:34.240049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.240054 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.240060 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-28 19:26:34.240065 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-28 19:26:34.240071 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-28 19:26:34.240076 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.240082 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-28 19:26:34.240087 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-28 19:26:34.240092 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-28 19:26:34.240098 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.240103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 19:26:34.240109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 19:26:34.240114 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-28 19:26:34.240120 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-28 19:26:34.240125 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-28 19:26:34.240131 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.240136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 19:26:34.240142 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.240147 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-28 19:26:34.240153 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-28 19:26:34.240162 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-28 19:26:34.240168 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.240173 | orchestrator | 2025-05-28 19:26:34.240179 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-28 19:26:34.240184 | orchestrator | Wednesday 28 May 2025 19:17:11 +0000 (0:00:01.872) 0:04:19.618 ********* 2025-05-28 19:26:34.240190 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.240195 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.240201 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.240206 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.240212 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.240217 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.240223 | orchestrator | 2025-05-28 19:26:34.240265 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-28 19:26:34.240274 | orchestrator | Wednesday 28 May 2025 19:17:15 +0000 (0:00:04.571) 0:04:24.189 ********* 2025-05-28 19:26:34.240279 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.240288 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.240294 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.240299 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.240304 | orchestrator | changed: [testbed-node-4] 
2025-05-28 19:26:34.240310 | orchestrator | changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : mons handler] **********************************
Wednesday 28 May 2025 19:17:16 +0000 (0:00:01.021) 0:04:25.211 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ********
Wednesday 28 May 2025 19:17:18 +0000 (0:00:01.162) 0:04:26.374 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : set _mon_handler_called before restart] *******************
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
Wednesday 28 May 2025 19:17:19 +0000 (0:00:01.195) 0:04:27.569 *********

TASK [ceph-handler : copy mon restart script] **********************************
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ********************
Wednesday 28 May 2025 19:17:20 +0000 (0:00:01.268) 0:04:28.838 *********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] *********
Wednesday 28 May 2025 19:17:21 +0000 (0:00:00.968) 0:04:29.806 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : set _mon_handler_called after restart] ********************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : osds handler] **********************************
Wednesday 28 May 2025 19:17:22 +0000 (0:00:00.836) 0:04:30.643 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : osds handler] *********************************************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : mdss handler] **********************************
Wednesday 28 May 2025 19:17:23 +0000 (0:00:00.751) 0:04:31.395 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : mdss handler] *********************************************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : rgws handler] **********************************
Wednesday 28 May 2025 19:17:23 +0000 (0:00:00.877) 0:04:32.272 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : rgws handler] *********************************************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
Wednesday 28 May 2025 19:17:24 +0000 (0:00:00.145) 0:04:33.118 *********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
Wednesday 28 May 2025 19:17:24 +0000 (0:00:00.145) 0:04:33.263 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : rbdmirrors handler] ***************************************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
Wednesday 28 May 2025 19:17:25 +0000 (0:00:00.903) 0:04:34.167 *********

TASK [ceph-handler : mgrs handler] *********************************************
skipping: [testbed-node-3]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
Wednesday 28 May 2025 19:17:26 +0000 (0:00:00.862) 0:04:35.030 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : set _mgr_handler_called before restart] *******************
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
Wednesday 28 May 2025 19:17:27 +0000 (0:00:01.161) 0:04:36.191 *********

TASK [ceph-handler : copy mgr restart script] **********************************
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
Wednesday 28 May 2025 19:17:29 +0000 (0:00:01.517) 0:04:37.709 *********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
Wednesday 28 May 2025 19:17:30 +0000 (0:00:01.140) 0:04:38.849 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : set _mgr_handler_called after restart] ********************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : mdss handler] **********************************
Wednesday 28 May 2025 19:17:31 +0000 (0:00:01.095) 0:04:39.945 *********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
Wednesday 28 May 2025 19:17:32 +0000 (0:00:00.547) 0:04:40.492 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] *****************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
Wednesday 28 May 2025 19:17:33 +0000 (0:00:01.184) 0:04:41.677 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
Wednesday 28 May 2025 19:17:34 +0000 (0:00:01.229) 0:04:42.906 *********
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [ceph-handler : remove tempdir for scripts] *******************************
skipping: [testbed-node-3] => (item=testbed-node-3)

RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
changed: [testbed-node-2]

TASK [ceph-handler : remove tempdir for scripts] *******************************
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
Wednesday 28 May 2025 19:17:36 +0000 (0:00:01.578) 0:04:44.485 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : rgws handler] **********************************
Wednesday 28 May 2025 19:17:37 +0000 (0:00:01.134) 0:04:45.619 *********
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
Wednesday 28 May 2025 19:17:38 +0000 (0:00:00.835) 0:04:46.454 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
Wednesday 28 May 2025 19:17:38 +0000 (0:00:00.842) 0:04:47.297 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
Wednesday 28 May 2025 19:17:40 +0000 (0:00:01.584) 0:04:48.882 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
Wednesday 28 May 2025 19:17:41 +0000 (0:00:00.922) 0:04:49.804 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
Wednesday 28 May 2025 19:17:41 +0000 (0:00:00.426) 0:04:50.231 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
Wednesday 28 May 2025 19:17:42 +0000 (0:00:00.606) 0:04:50.837 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
Wednesday 28 May 2025 19:17:42 +0000 (0:00:00.352) 0:04:51.189 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
Wednesday 28 May 2025 19:17:43 +0000 (0:00:00.431) 0:04:51.620 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : include check_running_containers.yml] *********************
Wednesday 28 May 2025 19:17:45 +0000 (0:00:02.660) 0:04:54.281 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : check for a mon container] ********************************
Wednesday 28 May 2025 19:17:46 +0000 (0:00:00.569) 0:04:54.851 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : check for an osd container] *******************************
Wednesday 28 May 2025 19:17:47 +0000 (0:00:00.734) 0:04:55.586 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a mds container] ********************************
Wednesday 28 May 2025 19:17:47 +0000 (0:00:00.573) 0:04:56.159 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a rgw container] ********************************
Wednesday 28 May 2025 19:17:48 +0000 (0:00:00.339) 0:04:56.499 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a mgr container] ********************************
Wednesday 28 May 2025 19:17:48 +0000 (0:00:00.356) 0:04:56.856 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : check for a rbd mirror container] *************************
Wednesday 28 May 2025 19:17:49 +0000 (0:00:00.798) 0:04:57.654 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a nfs container] ********************************
Wednesday 28 May 2025 19:17:49 +0000 (0:00:00.582) 0:04:58.236 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a tcmu-runner container] ************************
Wednesday 28 May 2025 19:17:50 +0000 (0:00:00.343) 0:04:58.579 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a rbd-target-api container] *********************
Wednesday 28 May 2025 19:17:50 +0000 (0:00:00.400) 0:04:58.979 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a rbd-target-gw container] **********************
Wednesday 28 May 2025 19:17:50 +0000 (0:00:00.320) 0:04:59.300 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a ceph-crash container] *************************
Wednesday 28 May 2025 19:17:51 +0000 (0:00:00.586) 0:04:59.886 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : include check_socket_non_container.yml] *******************
Wednesday 28 May 2025 19:17:52 +0000 (0:00:00.744) 0:05:00.630 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_mon_status] ******************************
Wednesday 28 May 2025 19:17:52 +0000 (0:00:00.339) 0:05:00.970 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : set_fact handler_osd_status] ******************************
Wednesday 28 May 2025 19:17:52 +0000 (0:00:00.358) 0:05:01.329 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_mds_status] ******************************
Wednesday 28 May 2025 19:17:53 +0000 (0:00:00.601) 0:05:01.930 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_rgw_status] ******************************
Wednesday 28 May 2025 19:17:53 +0000 (0:00:00.341) 0:05:02.272 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_nfs_status] ******************************
Wednesday 28 May 2025 19:17:54 +0000 (0:00:00.345) 0:05:02.617 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_rbd_status] ******************************
Wednesday 28 May 2025 19:17:54 +0000 (0:00:00.348) 0:05:02.965 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_mgr_status] ******************************
Wednesday 28 May 2025 19:17:55 +0000 (0:00:00.635) 0:05:03.601 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : set_fact handler_crash_status] ****************************
Wednesday 28 May 2025 19:17:55 +0000 (0:00:00.403) 0:05:04.005 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
Wednesday 28 May 2025 19:17:56 +0000 (0:00:00.365) 0:05:04.370 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
Wednesday 28 May 2025 19:17:56 +0000 (0:00:00.362) 0:05:04.733 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : reset num_osds] ********************************************
Wednesday 28 May 2025 19:17:57 +0000 (0:00:00.677) 0:05:05.410 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : count number of osds for lvm scenario] *********************
Wednesday 28 May 2025 19:17:57 +0000 (0:00:00.409) 0:05:05.819 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : look up for ceph-volume rejected devices] ******************
Wednesday 28 May 2025 19:17:57 +0000 (0:00:00.390) 0:05:06.210 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact rejected_devices] *********************************
Wednesday 28 May 2025 19:17:58 +0000 (0:00:00.429) 0:05:06.639 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact _devices] *****************************************
Wednesday 28 May 2025 19:17:58 +0000 (0:00:00.656) 0:05:07.296 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Wednesday 28 May 2025 19:17:59 +0000 (0:00:00.431) 0:05:07.728 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Wednesday 28 May 2025 19:17:59 +0000 (0:00:00.396) 0:05:08.124 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Wednesday 28 May 2025 19:18:00 +0000 (0:00:00.371) 0:05:08.496 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
Wednesday 28 May 2025 19:18:00 +0000 (0:00:00.780) 0:05:09.277 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
Wednesday 28 May 2025 19:18:01 +0000 (0:00:00.444) 0:05:09.721 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
Wednesday 28 May 2025 19:18:01 +0000 (0:00:00.374) 0:05:10.095 *********
skipping: [testbed-node-0] => (item=)
skipping: [testbed-node-0] => (item=)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=)
skipping: [testbed-node-1] => (item=)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=)
skipping: [testbed-node-2] => (item=)
skipping: [testbed-node-2]

TASK [ceph-config : drop osd_memory_target from conf override] *****************
Wednesday 28 May 2025 19:18:02 +0000 (0:00:00.413) 0:05:10.509 *********
skipping: [testbed-node-0] => (item=osd memory target)
skipping: [testbed-node-0] => (item=osd_memory_target)
skipping: [testbed-node-1] => (item=osd memory target)
skipping: [testbed-node-1] => (item=osd_memory_target)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=osd memory target)
skipping: [testbed-node-2] => (item=osd_memory_target)
skipping: [testbed-node-2]

TASK [ceph-config : set_fact _osd_memory_target] *******************************
Wednesday 28 May 2025 19:18:02 +0000 (0:00:00.686) 0:05:11.195 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : create ceph conf directory] ********************************
Wednesday 28 May 2025 19:18:03 +0000 (0:00:00.382) 0:05:11.578 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Wednesday 28 May 2025 19:18:03 +0000 (0:00:00.389) 0:05:11.967
********* 2025-05-28 19:26:34.242898 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.242902 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.242907 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.242912 | orchestrator | 2025-05-28 19:26:34.242916 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-28 19:26:34.242921 | orchestrator | Wednesday 28 May 2025 19:18:03 +0000 (0:00:00.335) 0:05:12.303 ********* 2025-05-28 19:26:34.242926 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.242930 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.242935 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.242939 | orchestrator | 2025-05-28 19:26:34.242944 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-28 19:26:34.242949 | orchestrator | Wednesday 28 May 2025 19:18:04 +0000 (0:00:00.666) 0:05:12.969 ********* 2025-05-28 19:26:34.242953 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.242958 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.242963 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.242967 | orchestrator | 2025-05-28 19:26:34.242980 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-28 19:26:34.242985 | orchestrator | Wednesday 28 May 2025 19:18:05 +0000 (0:00:00.375) 0:05:13.345 ********* 2025-05-28 19:26:34.242990 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.242994 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.242999 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243004 | orchestrator | 2025-05-28 19:26:34.243008 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-28 19:26:34.243013 | orchestrator | Wednesday 28 May 2025 19:18:05 +0000 (0:00:00.399) 0:05:13.744 
********* 2025-05-28 19:26:34.243017 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.243022 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 19:26:34.243027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.243031 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243036 | orchestrator | 2025-05-28 19:26:34.243040 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-28 19:26:34.243045 | orchestrator | Wednesday 28 May 2025 19:18:05 +0000 (0:00:00.422) 0:05:14.166 ********* 2025-05-28 19:26:34.243050 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.243054 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 19:26:34.243059 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.243064 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243068 | orchestrator | 2025-05-28 19:26:34.243073 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-28 19:26:34.243077 | orchestrator | Wednesday 28 May 2025 19:18:06 +0000 (0:00:00.486) 0:05:14.653 ********* 2025-05-28 19:26:34.243110 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.243117 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 19:26:34.243122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.243126 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243134 | orchestrator | 2025-05-28 19:26:34.243138 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-28 19:26:34.243143 | orchestrator | Wednesday 28 May 2025 19:18:07 +0000 (0:00:00.779) 0:05:15.432 ********* 2025-05-28 19:26:34.243151 | orchestrator | 
skipping: [testbed-node-0] 2025-05-28 19:26:34.243156 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243160 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243165 | orchestrator | 2025-05-28 19:26:34.243169 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-28 19:26:34.243174 | orchestrator | Wednesday 28 May 2025 19:18:07 +0000 (0:00:00.682) 0:05:16.115 ********* 2025-05-28 19:26:34.243179 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-28 19:26:34.243183 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243188 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-28 19:26:34.243193 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243197 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-28 19:26:34.243202 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243206 | orchestrator | 2025-05-28 19:26:34.243214 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-28 19:26:34.243219 | orchestrator | Wednesday 28 May 2025 19:18:08 +0000 (0:00:00.632) 0:05:16.747 ********* 2025-05-28 19:26:34.243223 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243228 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243232 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243237 | orchestrator | 2025-05-28 19:26:34.243242 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-28 19:26:34.243246 | orchestrator | Wednesday 28 May 2025 19:18:08 +0000 (0:00:00.384) 0:05:17.132 ********* 2025-05-28 19:26:34.243251 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243256 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243260 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243265 | orchestrator | 2025-05-28 19:26:34.243270 | 
orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-28 19:26:34.243274 | orchestrator | Wednesday 28 May 2025 19:18:09 +0000 (0:00:00.798) 0:05:17.930 ********* 2025-05-28 19:26:34.243279 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-28 19:26:34.243284 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243288 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-28 19:26:34.243293 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243298 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-28 19:26:34.243302 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243307 | orchestrator | 2025-05-28 19:26:34.243312 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-28 19:26:34.243316 | orchestrator | Wednesday 28 May 2025 19:18:10 +0000 (0:00:00.529) 0:05:18.460 ********* 2025-05-28 19:26:34.243321 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243326 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243330 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243335 | orchestrator | 2025-05-28 19:26:34.243339 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-28 19:26:34.243344 | orchestrator | Wednesday 28 May 2025 19:18:10 +0000 (0:00:00.452) 0:05:18.912 ********* 2025-05-28 19:26:34.243349 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-28 19:26:34.243354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-28 19:26:34.243358 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-28 19:26:34.243363 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-28 19:26:34.243368 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-28 19:26:34.243372 | orchestrator | 
skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-28 19:26:34.243377 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243382 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243386 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-28 19:26:34.243391 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-28 19:26:34.243398 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-28 19:26:34.243403 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243408 | orchestrator | 2025-05-28 19:26:34.243412 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-28 19:26:34.243417 | orchestrator | Wednesday 28 May 2025 19:18:11 +0000 (0:00:01.063) 0:05:19.976 ********* 2025-05-28 19:26:34.243422 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243426 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243431 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243436 | orchestrator | 2025-05-28 19:26:34.243440 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-28 19:26:34.243445 | orchestrator | Wednesday 28 May 2025 19:18:12 +0000 (0:00:00.576) 0:05:20.552 ********* 2025-05-28 19:26:34.243450 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243455 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243459 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243464 | orchestrator | 2025-05-28 19:26:34.243469 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-28 19:26:34.243473 | orchestrator | Wednesday 28 May 2025 19:18:13 +0000 (0:00:00.797) 0:05:21.349 ********* 2025-05-28 19:26:34.243478 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243483 | orchestrator | skipping: [testbed-node-1] 2025-05-28 
19:26:34.243487 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243492 | orchestrator | 2025-05-28 19:26:34.243497 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-28 19:26:34.243501 | orchestrator | Wednesday 28 May 2025 19:18:13 +0000 (0:00:00.589) 0:05:21.938 ********* 2025-05-28 19:26:34.243506 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243511 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243528 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243534 | orchestrator | 2025-05-28 19:26:34.243538 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-28 19:26:34.243543 | orchestrator | Wednesday 28 May 2025 19:18:14 +0000 (0:00:00.872) 0:05:22.811 ********* 2025-05-28 19:26:34.243548 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.243552 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.243557 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.243562 | orchestrator | 2025-05-28 19:26:34.243566 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-28 19:26:34.243571 | orchestrator | Wednesday 28 May 2025 19:18:14 +0000 (0:00:00.362) 0:05:23.174 ********* 2025-05-28 19:26:34.243576 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:26:34.243580 | orchestrator | 2025-05-28 19:26:34.243585 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-28 19:26:34.243590 | orchestrator | Wednesday 28 May 2025 19:18:15 +0000 (0:00:00.641) 0:05:23.816 ********* 2025-05-28 19:26:34.243594 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243599 | orchestrator | 2025-05-28 19:26:34.243603 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] 
***************************** 2025-05-28 19:26:34.243608 | orchestrator | Wednesday 28 May 2025 19:18:15 +0000 (0:00:00.165) 0:05:23.982 ********* 2025-05-28 19:26:34.243615 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-28 19:26:34.243620 | orchestrator | 2025-05-28 19:26:34.243625 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-28 19:26:34.243629 | orchestrator | Wednesday 28 May 2025 19:18:16 +0000 (0:00:00.993) 0:05:24.975 ********* 2025-05-28 19:26:34.243634 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.243638 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.243643 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.243648 | orchestrator | 2025-05-28 19:26:34.243652 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-28 19:26:34.243657 | orchestrator | Wednesday 28 May 2025 19:18:17 +0000 (0:00:00.367) 0:05:25.343 ********* 2025-05-28 19:26:34.243665 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.243669 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.243674 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.243678 | orchestrator | 2025-05-28 19:26:34.243683 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-28 19:26:34.243688 | orchestrator | Wednesday 28 May 2025 19:18:17 +0000 (0:00:00.405) 0:05:25.749 ********* 2025-05-28 19:26:34.243692 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.243697 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.243702 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.243706 | orchestrator | 2025-05-28 19:26:34.243711 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-28 19:26:34.243715 | orchestrator | Wednesday 28 May 2025 19:18:18 +0000 (0:00:01.244) 0:05:26.993 ********* 2025-05-28 
19:26:34.243720 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.243725 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.243729 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.243734 | orchestrator | 2025-05-28 19:26:34.243739 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-28 19:26:34.243743 | orchestrator | Wednesday 28 May 2025 19:18:19 +0000 (0:00:01.220) 0:05:28.214 ********* 2025-05-28 19:26:34.243748 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.243752 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.243757 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.243762 | orchestrator | 2025-05-28 19:26:34.243766 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-28 19:26:34.243771 | orchestrator | Wednesday 28 May 2025 19:18:20 +0000 (0:00:00.747) 0:05:28.961 ********* 2025-05-28 19:26:34.243775 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.243780 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.243785 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.243789 | orchestrator | 2025-05-28 19:26:34.243794 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-28 19:26:34.243798 | orchestrator | Wednesday 28 May 2025 19:18:21 +0000 (0:00:00.704) 0:05:29.666 ********* 2025-05-28 19:26:34.243803 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243808 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243812 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243817 | orchestrator | 2025-05-28 19:26:34.243821 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-28 19:26:34.243826 | orchestrator | Wednesday 28 May 2025 19:18:21 +0000 (0:00:00.393) 0:05:30.060 ********* 2025-05-28 19:26:34.243831 | 
orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.243835 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.243840 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.243844 | orchestrator | 2025-05-28 19:26:34.243849 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-05-28 19:26:34.243854 | orchestrator | Wednesday 28 May 2025 19:18:22 +0000 (0:00:00.625) 0:05:30.685 ********* 2025-05-28 19:26:34.243858 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.243863 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243867 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243872 | orchestrator | 2025-05-28 19:26:34.243876 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-05-28 19:26:34.243881 | orchestrator | Wednesday 28 May 2025 19:18:22 +0000 (0:00:00.374) 0:05:31.060 ********* 2025-05-28 19:26:34.243886 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.243890 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.243895 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.243899 | orchestrator | 2025-05-28 19:26:34.243904 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-05-28 19:26:34.243909 | orchestrator | Wednesday 28 May 2025 19:18:23 +0000 (0:00:00.325) 0:05:31.386 ********* 2025-05-28 19:26:34.243913 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.243921 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.243925 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.243930 | orchestrator | 2025-05-28 19:26:34.243934 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-05-28 19:26:34.243951 | orchestrator | Wednesday 28 May 2025 19:18:24 +0000 (0:00:01.339) 0:05:32.725 ********* 2025-05-28 19:26:34.243956 | orchestrator | skipping: [testbed-node-0] 
2025-05-28 19:26:34.243961 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.243965 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.243993 | orchestrator | 2025-05-28 19:26:34.243998 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-05-28 19:26:34.244003 | orchestrator | Wednesday 28 May 2025 19:18:25 +0000 (0:00:00.677) 0:05:33.403 ********* 2025-05-28 19:26:34.244008 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:26:34.244012 | orchestrator | 2025-05-28 19:26:34.244017 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-05-28 19:26:34.244022 | orchestrator | Wednesday 28 May 2025 19:18:25 +0000 (0:00:00.590) 0:05:33.993 ********* 2025-05-28 19:26:34.244026 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.244031 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.244035 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.244040 | orchestrator | 2025-05-28 19:26:34.244045 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-05-28 19:26:34.244049 | orchestrator | Wednesday 28 May 2025 19:18:25 +0000 (0:00:00.337) 0:05:34.331 ********* 2025-05-28 19:26:34.244054 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.244058 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.244063 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.244068 | orchestrator | 2025-05-28 19:26:34.244075 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-05-28 19:26:34.244080 | orchestrator | Wednesday 28 May 2025 19:18:26 +0000 (0:00:00.597) 0:05:34.928 ********* 2025-05-28 19:26:34.244084 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-28 19:26:34.244089 | orchestrator | 2025-05-28 19:26:34.244093 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-05-28 19:26:34.244098 | orchestrator | Wednesday 28 May 2025 19:18:27 +0000 (0:00:00.675) 0:05:35.604 ********* 2025-05-28 19:26:34.244103 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.244107 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.244112 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.244116 | orchestrator | 2025-05-28 19:26:34.244121 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-05-28 19:26:34.244126 | orchestrator | Wednesday 28 May 2025 19:18:28 +0000 (0:00:01.493) 0:05:37.098 ********* 2025-05-28 19:26:34.244130 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.244135 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.244139 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.244144 | orchestrator | 2025-05-28 19:26:34.244149 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-05-28 19:26:34.244153 | orchestrator | Wednesday 28 May 2025 19:18:29 +0000 (0:00:01.188) 0:05:38.286 ********* 2025-05-28 19:26:34.244158 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.244162 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.244167 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.244171 | orchestrator | 2025-05-28 19:26:34.244176 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-05-28 19:26:34.244181 | orchestrator | Wednesday 28 May 2025 19:18:31 +0000 (0:00:01.787) 0:05:40.073 ********* 2025-05-28 19:26:34.244185 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.244190 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.244194 | orchestrator | changed: [testbed-node-2] 
2025-05-28 19:26:34.244199 | orchestrator | 2025-05-28 19:26:34.244207 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-05-28 19:26:34.244211 | orchestrator | Wednesday 28 May 2025 19:18:33 +0000 (0:00:02.086) 0:05:42.159 ********* 2025-05-28 19:26:34.244216 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:26:34.244220 | orchestrator | 2025-05-28 19:26:34.244225 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-05-28 19:26:34.244229 | orchestrator | Wednesday 28 May 2025 19:18:34 +0000 (0:00:00.625) 0:05:42.785 ********* 2025-05-28 19:26:34.244234 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-05-28 19:26:34.244239 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.244243 | orchestrator | 2025-05-28 19:26:34.244248 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-05-28 19:26:34.244253 | orchestrator | Wednesday 28 May 2025 19:18:55 +0000 (0:00:21.436) 0:06:04.222 ********* 2025-05-28 19:26:34.244257 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.244262 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.244266 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.244271 | orchestrator | 2025-05-28 19:26:34.244275 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-05-28 19:26:34.244280 | orchestrator | Wednesday 28 May 2025 19:19:03 +0000 (0:00:08.014) 0:06:12.236 ********* 2025-05-28 19:26:34.244284 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.244289 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.244294 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.244298 | orchestrator | 2025-05-28 19:26:34.244303 | 
orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-28 19:26:34.244307 | orchestrator | Wednesday 28 May 2025 19:19:05 +0000 (0:00:01.212) 0:06:13.449 ********* 2025-05-28 19:26:34.244312 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.244317 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.244321 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.244326 | orchestrator | 2025-05-28 19:26:34.244330 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-28 19:26:34.244335 | orchestrator | Wednesday 28 May 2025 19:19:05 +0000 (0:00:00.701) 0:06:14.151 ********* 2025-05-28 19:26:34.244339 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:26:34.244344 | orchestrator | 2025-05-28 19:26:34.244349 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-28 19:26:34.244367 | orchestrator | Wednesday 28 May 2025 19:19:06 +0000 (0:00:00.784) 0:06:14.935 ********* 2025-05-28 19:26:34.244372 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.244377 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.244381 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.244386 | orchestrator | 2025-05-28 19:26:34.244391 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-28 19:26:34.244395 | orchestrator | Wednesday 28 May 2025 19:19:06 +0000 (0:00:00.371) 0:06:15.306 ********* 2025-05-28 19:26:34.244400 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.244405 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.244409 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.244414 | orchestrator | 2025-05-28 19:26:34.244418 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] 
******************** 2025-05-28 19:26:34.244423 | orchestrator | Wednesday 28 May 2025 19:19:08 +0000 (0:00:01.458) 0:06:16.764 ********* 2025-05-28 19:26:34.244428 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 19:26:34.244432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 19:26:34.244437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 19:26:34.244441 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.244446 | orchestrator | 2025-05-28 19:26:34.244451 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-28 19:26:34.244462 | orchestrator | Wednesday 28 May 2025 19:19:09 +0000 (0:00:00.684) 0:06:17.449 ********* 2025-05-28 19:26:34.244467 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.244471 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.244476 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.244481 | orchestrator | 2025-05-28 19:26:34.244485 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-28 19:26:34.244490 | orchestrator | Wednesday 28 May 2025 19:19:09 +0000 (0:00:00.371) 0:06:17.821 ********* 2025-05-28 19:26:34.244494 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.244499 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.244504 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.244508 | orchestrator | 2025-05-28 19:26:34.244513 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-28 19:26:34.244517 | orchestrator | 2025-05-28 19:26:34.244522 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-28 19:26:34.244527 | orchestrator | Wednesday 28 May 2025 19:19:11 +0000 (0:00:02.202) 0:06:20.024 ********* 2025-05-28 19:26:34.244531 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:26:34.244535 | orchestrator |
2025-05-28 19:26:34.244540 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-28 19:26:34.244544 | orchestrator | Wednesday 28 May 2025 19:19:12 +0000 (0:00:00.842) 0:06:20.867 *********
2025-05-28 19:26:34.244548 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.244552 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.244556 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.244560 | orchestrator |
2025-05-28 19:26:34.244564 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-28 19:26:34.244569 | orchestrator | Wednesday 28 May 2025 19:19:13 +0000 (0:00:00.779) 0:06:21.646 *********
2025-05-28 19:26:34.244573 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244577 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244581 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244585 | orchestrator |
2025-05-28 19:26:34.244589 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-28 19:26:34.244594 | orchestrator | Wednesday 28 May 2025 19:19:13 +0000 (0:00:00.323) 0:06:21.970 *********
2025-05-28 19:26:34.244598 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244602 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244606 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244610 | orchestrator |
2025-05-28 19:26:34.244614 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-28 19:26:34.244619 | orchestrator | Wednesday 28 May 2025 19:19:14 +0000 (0:00:00.595) 0:06:22.565 *********
2025-05-28 19:26:34.244623 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244627 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244631 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244635 | orchestrator |
2025-05-28 19:26:34.244639 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-28 19:26:34.244644 | orchestrator | Wednesday 28 May 2025 19:19:14 +0000 (0:00:00.329) 0:06:22.894 *********
2025-05-28 19:26:34.244648 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.244652 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.244656 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.244660 | orchestrator |
2025-05-28 19:26:34.244664 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-28 19:26:34.244668 | orchestrator | Wednesday 28 May 2025 19:19:15 +0000 (0:00:00.742) 0:06:23.637 *********
2025-05-28 19:26:34.244673 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244677 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244681 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244685 | orchestrator |
2025-05-28 19:26:34.244691 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-28 19:26:34.244696 | orchestrator | Wednesday 28 May 2025 19:19:15 +0000 (0:00:00.447) 0:06:24.084 *********
2025-05-28 19:26:34.244700 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244704 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244708 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244712 | orchestrator |
2025-05-28 19:26:34.244717 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-28 19:26:34.244721 | orchestrator | Wednesday 28 May 2025 19:19:16 +0000 (0:00:00.714) 0:06:24.799 *********
2025-05-28 19:26:34.244725 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244729 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244733 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244737 | orchestrator |
2025-05-28 19:26:34.244741 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-28 19:26:34.244756 | orchestrator | Wednesday 28 May 2025 19:19:16 +0000 (0:00:00.352) 0:06:25.151 *********
2025-05-28 19:26:34.244761 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244765 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244769 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244774 | orchestrator |
2025-05-28 19:26:34.244778 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-28 19:26:34.244782 | orchestrator | Wednesday 28 May 2025 19:19:17 +0000 (0:00:00.323) 0:06:25.474 *********
2025-05-28 19:26:34.244786 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244790 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244794 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244799 | orchestrator |
2025-05-28 19:26:34.244803 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-28 19:26:34.244807 | orchestrator | Wednesday 28 May 2025 19:19:17 +0000 (0:00:00.344) 0:06:25.819 *********
2025-05-28 19:26:34.244811 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.244815 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.244819 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.244823 | orchestrator |
2025-05-28 19:26:34.244828 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-28 19:26:34.244832 | orchestrator | Wednesday 28 May 2025 19:19:18 +0000 (0:00:01.102) 0:06:26.922 *********
2025-05-28 19:26:34.244836 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244840 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244844 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244848 | orchestrator |
2025-05-28 19:26:34.244855 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-28 19:26:34.244859 | orchestrator | Wednesday 28 May 2025 19:19:18 +0000 (0:00:00.330) 0:06:27.253 *********
2025-05-28 19:26:34.244863 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.244867 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.244872 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.244876 | orchestrator |
2025-05-28 19:26:34.244880 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-28 19:26:34.244884 | orchestrator | Wednesday 28 May 2025 19:19:19 +0000 (0:00:00.388) 0:06:27.641 *********
2025-05-28 19:26:34.244888 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244892 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244896 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244901 | orchestrator |
2025-05-28 19:26:34.244905 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-28 19:26:34.244909 | orchestrator | Wednesday 28 May 2025 19:19:19 +0000 (0:00:00.337) 0:06:27.979 *********
2025-05-28 19:26:34.244913 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244917 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244921 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244925 | orchestrator |
2025-05-28 19:26:34.244930 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-28 19:26:34.244936 | orchestrator | Wednesday 28 May 2025 19:19:20 +0000 (0:00:00.656) 0:06:28.636 *********
2025-05-28 19:26:34.244941 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244945 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244949 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244953 | orchestrator |
2025-05-28 19:26:34.244957 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-28 19:26:34.244961 | orchestrator | Wednesday 28 May 2025 19:19:20 +0000 (0:00:00.358) 0:06:28.994 *********
2025-05-28 19:26:34.244966 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.244978 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.244982 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.244986 | orchestrator |
2025-05-28 19:26:34.244991 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-28 19:26:34.244995 | orchestrator | Wednesday 28 May 2025 19:19:20 +0000 (0:00:00.321) 0:06:29.316 *********
2025-05-28 19:26:34.244999 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245003 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245007 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245011 | orchestrator |
2025-05-28 19:26:34.245016 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-28 19:26:34.245020 | orchestrator | Wednesday 28 May 2025 19:19:21 +0000 (0:00:00.351) 0:06:29.668 *********
2025-05-28 19:26:34.245024 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.245028 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.245032 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.245036 | orchestrator |
2025-05-28 19:26:34.245041 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-28 19:26:34.245045 | orchestrator | Wednesday 28 May 2025 19:19:22 +0000 (0:00:00.680) 0:06:30.349 *********
2025-05-28 19:26:34.245049 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.245053 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.245057 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.245061 | orchestrator |
2025-05-28 19:26:34.245065 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-28 19:26:34.245070 | orchestrator | Wednesday 28 May 2025 19:19:22 +0000 (0:00:00.363) 0:06:30.712 *********
2025-05-28 19:26:34.245074 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245078 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245082 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245086 | orchestrator |
2025-05-28 19:26:34.245090 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-28 19:26:34.245094 | orchestrator | Wednesday 28 May 2025 19:19:22 +0000 (0:00:00.363) 0:06:31.076 *********
2025-05-28 19:26:34.245099 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245103 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245107 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245111 | orchestrator |
2025-05-28 19:26:34.245115 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-28 19:26:34.245119 | orchestrator | Wednesday 28 May 2025 19:19:23 +0000 (0:00:00.372) 0:06:31.449 *********
2025-05-28 19:26:34.245123 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245128 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245132 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245136 | orchestrator |
2025-05-28 19:26:34.245140 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-28 19:26:34.245144 | orchestrator | Wednesday 28 May 2025 19:19:23 +0000 (0:00:00.679) 0:06:32.129 *********
2025-05-28 19:26:34.245161 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245166 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245170 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245174 | orchestrator |
2025-05-28 19:26:34.245178 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-28 19:26:34.245183 | orchestrator | Wednesday 28 May 2025 19:19:24 +0000 (0:00:00.361) 0:06:32.490 *********
2025-05-28 19:26:34.245190 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245194 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245198 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245202 | orchestrator |
2025-05-28 19:26:34.245207 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-28 19:26:34.245211 | orchestrator | Wednesday 28 May 2025 19:19:24 +0000 (0:00:00.359) 0:06:32.850 *********
2025-05-28 19:26:34.245215 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245219 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245223 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245228 | orchestrator |
2025-05-28 19:26:34.245232 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-28 19:26:34.245236 | orchestrator | Wednesday 28 May 2025 19:19:24 +0000 (0:00:00.327) 0:06:33.177 *********
2025-05-28 19:26:34.245240 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245244 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245248 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245253 | orchestrator |
2025-05-28 19:26:34.245259 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-28 19:26:34.245263 | orchestrator | Wednesday 28 May 2025 19:19:25 +0000 (0:00:00.699) 0:06:33.877 *********
2025-05-28 19:26:34.245268 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245272 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245276 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245280 | orchestrator |
2025-05-28 19:26:34.245284 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-28 19:26:34.245288 | orchestrator | Wednesday 28 May 2025 19:19:25 +0000 (0:00:00.389) 0:06:34.266 *********
2025-05-28 19:26:34.245293 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245297 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245301 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245305 | orchestrator |
2025-05-28 19:26:34.245309 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-28 19:26:34.245313 | orchestrator | Wednesday 28 May 2025 19:19:26 +0000 (0:00:00.365) 0:06:34.632 *********
2025-05-28 19:26:34.245318 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245322 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245326 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245330 | orchestrator |
2025-05-28 19:26:34.245334 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-28 19:26:34.245338 | orchestrator | Wednesday 28 May 2025 19:19:26 +0000 (0:00:00.658) 0:06:35.291 *********
2025-05-28 19:26:34.245343 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245347 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245351 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245355 | orchestrator |
2025-05-28 19:26:34.245359 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-28 19:26:34.245363 | orchestrator | Wednesday 28 May 2025 19:19:27 +0000 (0:00:00.348) 0:06:35.640 *********
2025-05-28 19:26:34.245368 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245372 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245376 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245380 | orchestrator |
2025-05-28 19:26:34.245384 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-28 19:26:34.245389 | orchestrator | Wednesday 28 May 2025 19:19:27 +0000 (0:00:00.400) 0:06:36.040 *********
2025-05-28 19:26:34.245393 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-28 19:26:34.245397 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-28 19:26:34.245401 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245405 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-28 19:26:34.245409 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-28 19:26:34.245416 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245420 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-28 19:26:34.245425 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-28 19:26:34.245429 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245433 | orchestrator |
2025-05-28 19:26:34.245437 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-28 19:26:34.245441 | orchestrator | Wednesday 28 May 2025 19:19:28 +0000 (0:00:00.399) 0:06:36.440 *********
2025-05-28 19:26:34.245445 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-28 19:26:34.245450 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-28 19:26:34.245454 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245458 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-28 19:26:34.245462 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-28 19:26:34.245466 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245470 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-28 19:26:34.245475 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-28 19:26:34.245479 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245483 | orchestrator |
2025-05-28 19:26:34.245487 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-28 19:26:34.245491 | orchestrator | Wednesday 28 May 2025 19:19:28 +0000 (0:00:00.665) 0:06:37.105 *********
2025-05-28 19:26:34.245495 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245499 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245504 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245508 | orchestrator |
2025-05-28 19:26:34.245512 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-28 19:26:34.245528 | orchestrator | Wednesday 28 May 2025 19:19:29 +0000 (0:00:00.396) 0:06:37.502 *********
2025-05-28 19:26:34.245533 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245537 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245541 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245545 | orchestrator |
2025-05-28 19:26:34.245550 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-28 19:26:34.245554 | orchestrator | Wednesday 28 May 2025 19:19:29 +0000 (0:00:00.401) 0:06:37.939 *********
2025-05-28 19:26:34.245558 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245562 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245566 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245570 | orchestrator |
2025-05-28 19:26:34.245574 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-28 19:26:34.245579 | orchestrator | Wednesday 28 May 2025 19:19:30 +0000 (0:00:00.401) 0:06:38.340 *********
2025-05-28 19:26:34.245583 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245587 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245591 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245595 | orchestrator |
2025-05-28 19:26:34.245599 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-28 19:26:34.245604 | orchestrator | Wednesday 28 May 2025 19:19:30 +0000 (0:00:00.702) 0:06:39.043 *********
2025-05-28 19:26:34.245608 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245614 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245618 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245622 | orchestrator |
2025-05-28 19:26:34.245626 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-28 19:26:34.245631 | orchestrator | Wednesday 28 May 2025 19:19:31 +0000 (0:00:00.359) 0:06:39.403 *********
2025-05-28 19:26:34.245635 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245639 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245643 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245650 | orchestrator |
2025-05-28 19:26:34.245654 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-28 19:26:34.245658 | orchestrator | Wednesday 28 May 2025 19:19:31 +0000 (0:00:00.386) 0:06:39.790 *********
2025-05-28 19:26:34.245663 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-28 19:26:34.245667 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-28 19:26:34.245671 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-28 19:26:34.245675 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245679 | orchestrator |
2025-05-28 19:26:34.245683 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-28 19:26:34.245688 | orchestrator | Wednesday 28 May 2025 19:19:31 +0000 (0:00:00.465) 0:06:40.255 *********
2025-05-28 19:26:34.245692 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-28 19:26:34.245696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-28 19:26:34.245700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-28 19:26:34.245704 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245708 | orchestrator |
2025-05-28 19:26:34.245712 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-28 19:26:34.245717 | orchestrator | Wednesday 28 May 2025 19:19:32 +0000 (0:00:00.456) 0:06:40.712 *********
2025-05-28 19:26:34.245721 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-28 19:26:34.245725 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-28 19:26:34.245729 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-28 19:26:34.245733 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245737 | orchestrator |
2025-05-28 19:26:34.245741 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-28 19:26:34.245746 | orchestrator | Wednesday 28 May 2025 19:19:33 +0000 (0:00:00.770) 0:06:41.482 *********
2025-05-28 19:26:34.245750 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245754 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245758 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245762 | orchestrator |
2025-05-28 19:26:34.245766 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-28 19:26:34.245771 | orchestrator | Wednesday 28 May 2025 19:19:33 +0000 (0:00:00.638) 0:06:42.121 *********
2025-05-28 19:26:34.245775 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-28 19:26:34.245779 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245783 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-28 19:26:34.245787 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245791 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-28 19:26:34.245796 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245800 | orchestrator |
2025-05-28 19:26:34.245804 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-28 19:26:34.245808 | orchestrator | Wednesday 28 May 2025 19:19:34 +0000 (0:00:00.544) 0:06:42.666 *********
2025-05-28 19:26:34.245812 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245816 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245821 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245825 | orchestrator |
2025-05-28 19:26:34.245829 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-28 19:26:34.245833 | orchestrator | Wednesday 28 May 2025 19:19:34 +0000 (0:00:00.373) 0:06:43.040 *********
2025-05-28 19:26:34.245837 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245841 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245845 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245849 | orchestrator |
2025-05-28 19:26:34.245854 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-28 19:26:34.245858 | orchestrator | Wednesday 28 May 2025 19:19:35 +0000 (0:00:00.356) 0:06:43.396 *********
2025-05-28 19:26:34.245864 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-28 19:26:34.245868 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245873 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-28 19:26:34.245877 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245892 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-28 19:26:34.245897 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245901 | orchestrator |
2025-05-28 19:26:34.245905 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-28 19:26:34.245909 | orchestrator | Wednesday 28 May 2025 19:19:35 +0000 (0:00:00.831) 0:06:44.228 *********
2025-05-28 19:26:34.245914 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245918 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245922 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245926 | orchestrator |
2025-05-28 19:26:34.245930 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-28 19:26:34.245934 | orchestrator | Wednesday 28 May 2025 19:19:36 +0000 (0:00:00.388) 0:06:44.616 *********
2025-05-28 19:26:34.245939 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-28 19:26:34.245943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-28 19:26:34.245947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-28 19:26:34.245951 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.245955 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-28 19:26:34.245959 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-28 19:26:34.245963 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-28 19:26:34.245968 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.245980 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-28 19:26:34.245985 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-28 19:26:34.245989 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-28 19:26:34.245993 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.245998 | orchestrator |
2025-05-28 19:26:34.246002 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-28 19:26:34.246006 | orchestrator | Wednesday 28 May 2025 19:19:36 +0000 (0:00:00.600) 0:06:45.216 *********
2025-05-28 19:26:34.246010 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.246025 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.246030 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.246034 | orchestrator |
2025-05-28 19:26:34.246038 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-28 19:26:34.246043 | orchestrator | Wednesday 28 May 2025 19:19:37 +0000 (0:00:00.938) 0:06:46.155 *********
2025-05-28 19:26:34.246047 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.246051 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.246055 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.246059 | orchestrator |
2025-05-28 19:26:34.246063 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-28 19:26:34.246068 | orchestrator | Wednesday 28 May 2025 19:19:38 +0000 (0:00:00.561) 0:06:46.716 *********
2025-05-28 19:26:34.246072 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.246076 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.246080 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.246084 | orchestrator |
2025-05-28 19:26:34.246089 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-28 19:26:34.246093 | orchestrator | Wednesday 28 May 2025 19:19:39 +0000 (0:00:00.947) 0:06:47.664 *********
2025-05-28 19:26:34.246097 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.246101 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.246105 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.246109 | orchestrator |
2025-05-28 19:26:34.246114 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] **********************************
2025-05-28 19:26:34.246121 | orchestrator | Wednesday 28 May 2025 19:19:40 +0000 (0:00:00.699) 0:06:48.364 *********
2025-05-28 19:26:34.246125 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-28 19:26:34.246129 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-28 19:26:34.246133 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-28 19:26:34.246137 | orchestrator |
2025-05-28 19:26:34.246142 | orchestrator | TASK [ceph-mgr : include common.yml] *******************************************
2025-05-28 19:26:34.246146 | orchestrator | Wednesday 28 May 2025 19:19:41 +0000 (0:00:01.081) 0:06:49.445 *********
2025-05-28 19:26:34.246150 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:26:34.246154 | orchestrator |
2025-05-28 19:26:34.246158 | orchestrator | TASK [ceph-mgr : create mgr directory] *****************************************
2025-05-28 19:26:34.246163 | orchestrator | Wednesday 28 May 2025 19:19:41 +0000 (0:00:00.584) 0:06:50.030 *********
2025-05-28 19:26:34.246167 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:26:34.246171 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:26:34.246175 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:26:34.246179 | orchestrator |
2025-05-28 19:26:34.246183 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] ***************************************
2025-05-28 19:26:34.246188 | orchestrator | Wednesday 28 May 2025 19:19:42 +0000 (0:00:00.723) 0:06:50.754 *********
2025-05-28 19:26:34.246192 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.246196 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.246200 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.246204 | orchestrator |
2025-05-28 19:26:34.246208 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] *********************
2025-05-28 19:26:34.246213 | orchestrator | Wednesday 28 May 2025 19:19:43 +0000 (0:00:00.628) 0:06:51.383 *********
2025-05-28 19:26:34.246217 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-28 19:26:34.246221 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-28 19:26:34.246225 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-28 19:26:34.246229 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-05-28 19:26:34.246234 | orchestrator |
2025-05-28 19:26:34.246238 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] *******************************************
2025-05-28 19:26:34.246242 | orchestrator | Wednesday 28 May 2025 19:19:51 +0000 (0:00:08.449) 0:06:59.832 *********
2025-05-28 19:26:34.246258 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.246263 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.246267 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.246272 | orchestrator |
2025-05-28 19:26:34.246276 | orchestrator | TASK [ceph-mgr : get keys from monitors] ***************************************
2025-05-28 19:26:34.246280 | orchestrator | Wednesday 28 May 2025 19:19:52 +0000 (0:00:00.555) 0:07:00.388 *********
2025-05-28 19:26:34.246284 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-28 19:26:34.246288 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-28 19:26:34.246292 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-28 19:26:34.246297 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-28 19:26:34.246301 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-28 19:26:34.246305 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-28 19:26:34.246309 | orchestrator |
2025-05-28 19:26:34.246313 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] ***********************************
2025-05-28 19:26:34.246317 | orchestrator | Wednesday 28 May 2025 19:19:53 +0000 (0:00:01.761) 0:07:02.149 *********
2025-05-28 19:26:34.246322 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-05-28 19:26:34.246326 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-28 19:26:34.246330 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-28 19:26:34.246339 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-28 19:26:34.246343 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-28 19:26:34.246347 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-28 19:26:34.246351 | orchestrator |
2025-05-28 19:26:34.246355 | orchestrator | TASK [ceph-mgr : set mgr key permissions] **************************************
2025-05-28 19:26:34.246360 | orchestrator | Wednesday 28 May 2025 19:19:55 +0000 (0:00:01.272) 0:07:03.422 *********
2025-05-28 19:26:34.246364 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.246368 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.246372 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.246376 | orchestrator |
2025-05-28 19:26:34.246380 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] *****************
2025-05-28 19:26:34.246384 | orchestrator | Wednesday 28 May 2025 19:19:55 +0000 (0:00:00.697) 0:07:04.119 *********
2025-05-28 19:26:34.246389 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.246393 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.246397 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.246401 | orchestrator |
2025-05-28 19:26:34.246405 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************
2025-05-28 19:26:34.246409 | orchestrator | Wednesday 28 May 2025 19:19:56 +0000 (0:00:00.629) 0:07:04.749 *********
2025-05-28 19:26:34.246413 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.246417 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.246421 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.246426 | orchestrator |
2025-05-28 19:26:34.246430 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] ****************************************
2025-05-28 19:26:34.246434 | orchestrator | Wednesday 28 May 2025 19:19:56 +0000 (0:00:00.355) 0:07:05.104 *********
2025-05-28 19:26:34.246438 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:26:34.246442 | orchestrator |
2025-05-28 19:26:34.246446 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] *************
2025-05-28 19:26:34.246450 | orchestrator | Wednesday 28 May 2025 19:19:57 +0000 (0:00:00.548) 0:07:05.652 *********
2025-05-28 19:26:34.246455 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.246459 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.246463 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.246467 | orchestrator |
2025-05-28 19:26:34.246471 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************
2025-05-28 19:26:34.246475 | orchestrator | Wednesday 28 May 2025 19:19:57 +0000 (0:00:00.641) 0:07:06.294 *********
2025-05-28 19:26:34.246479 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.246483 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.246488 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.246492 | orchestrator |
2025-05-28 19:26:34.246496 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************
2025-05-28 19:26:34.246500 | orchestrator | Wednesday 28 May 2025 19:19:58 +0000 (0:00:00.362) 0:07:06.656 *********
2025-05-28 19:26:34.246504 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:26:34.246508 | orchestrator |
2025-05-28 19:26:34.246512 | orchestrator | TASK [ceph-mgr : generate systemd unit file] ***********************************
2025-05-28 19:26:34.246516 | orchestrator | Wednesday 28 May 2025 19:19:58 +0000 (0:00:00.597) 0:07:07.254 *********
2025-05-28 19:26:34.246521 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:26:34.246525 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:26:34.246529 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:26:34.246533 | orchestrator |
2025-05-28 19:26:34.246537 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************
2025-05-28 19:26:34.246541 | orchestrator | Wednesday 28 May 2025 19:20:00 +0000 (0:00:01.643) 0:07:08.898 *********
2025-05-28 19:26:34.246545 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:26:34.246549 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:26:34.246556 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:26:34.246560 | orchestrator |
2025-05-28 19:26:34.246564 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] ***************************************
2025-05-28 19:26:34.246568 | orchestrator | Wednesday 28 May 2025 19:20:01 +0000 (0:00:01.213) 0:07:10.112 *********
2025-05-28 19:26:34.246572 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:26:34.246576 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:26:34.246581 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:26:34.246585 | orchestrator |
2025-05-28 19:26:34.246589 | orchestrator | TASK [ceph-mgr : systemd start mgr] ********************************************
2025-05-28 19:26:34.246593 |
orchestrator | Wednesday 28 May 2025 19:20:03 +0000 (0:00:02.019) 0:07:12.131 ********* 2025-05-28 19:26:34.246597 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.246601 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.246616 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.246621 | orchestrator | 2025-05-28 19:26:34.246625 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-28 19:26:34.246630 | orchestrator | Wednesday 28 May 2025 19:20:06 +0000 (0:00:02.359) 0:07:14.491 ********* 2025-05-28 19:26:34.246634 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.246638 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.246642 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-28 19:26:34.246646 | orchestrator | 2025-05-28 19:26:34.246650 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-28 19:26:34.246655 | orchestrator | Wednesday 28 May 2025 19:20:06 +0000 (0:00:00.620) 0:07:15.111 ********* 2025-05-28 19:26:34.246659 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-28 19:26:34.246663 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 
2025-05-28 19:26:34.246667 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-28 19:26:34.246672 | orchestrator | 2025-05-28 19:26:34.246676 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-28 19:26:34.246680 | orchestrator | Wednesday 28 May 2025 19:20:20 +0000 (0:00:13.560) 0:07:28.672 ********* 2025-05-28 19:26:34.246687 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-28 19:26:34.246692 | orchestrator | 2025-05-28 19:26:34.246696 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-28 19:26:34.246700 | orchestrator | Wednesday 28 May 2025 19:20:22 +0000 (0:00:01.758) 0:07:30.430 ********* 2025-05-28 19:26:34.246704 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.246708 | orchestrator | 2025-05-28 19:26:34.246712 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-28 19:26:34.246717 | orchestrator | Wednesday 28 May 2025 19:20:22 +0000 (0:00:00.490) 0:07:30.921 ********* 2025-05-28 19:26:34.246721 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.246725 | orchestrator | 2025-05-28 19:26:34.246729 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-28 19:26:34.246733 | orchestrator | Wednesday 28 May 2025 19:20:22 +0000 (0:00:00.298) 0:07:31.219 ********* 2025-05-28 19:26:34.246737 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-28 19:26:34.246741 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-28 19:26:34.246746 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-28 19:26:34.246750 | orchestrator | 2025-05-28 19:26:34.246754 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] 
************************************** 2025-05-28 19:26:34.246758 | orchestrator | Wednesday 28 May 2025 19:20:29 +0000 (0:00:06.830) 0:07:38.049 ********* 2025-05-28 19:26:34.246762 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-28 19:26:34.246766 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-28 19:26:34.246773 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-28 19:26:34.246778 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-28 19:26:34.246782 | orchestrator | 2025-05-28 19:26:34.246786 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-28 19:26:34.246790 | orchestrator | Wednesday 28 May 2025 19:20:35 +0000 (0:00:05.749) 0:07:43.799 ********* 2025-05-28 19:26:34.246794 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.246798 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.246803 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.246807 | orchestrator | 2025-05-28 19:26:34.246811 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-28 19:26:34.246815 | orchestrator | Wednesday 28 May 2025 19:20:36 +0000 (0:00:01.016) 0:07:44.815 ********* 2025-05-28 19:26:34.246819 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:26:34.246823 | orchestrator | 2025-05-28 19:26:34.246827 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-28 19:26:34.246831 | orchestrator | Wednesday 28 May 2025 19:20:37 +0000 (0:00:00.581) 0:07:45.397 ********* 2025-05-28 19:26:34.246836 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.246840 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.246844 | orchestrator | ok: 
[testbed-node-2] 2025-05-28 19:26:34.246848 | orchestrator | 2025-05-28 19:26:34.246852 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-28 19:26:34.246856 | orchestrator | Wednesday 28 May 2025 19:20:37 +0000 (0:00:00.344) 0:07:45.741 ********* 2025-05-28 19:26:34.246860 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:26:34.246865 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.246869 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.246873 | orchestrator | 2025-05-28 19:26:34.246877 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-28 19:26:34.246881 | orchestrator | Wednesday 28 May 2025 19:20:38 +0000 (0:00:01.291) 0:07:47.033 ********* 2025-05-28 19:26:34.246885 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 19:26:34.246890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 19:26:34.246894 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 19:26:34.246898 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.246902 | orchestrator | 2025-05-28 19:26:34.246906 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-28 19:26:34.246910 | orchestrator | Wednesday 28 May 2025 19:20:39 +0000 (0:00:00.698) 0:07:47.731 ********* 2025-05-28 19:26:34.246914 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.246918 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.246923 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.246927 | orchestrator | 2025-05-28 19:26:34.246942 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-28 19:26:34.246947 | orchestrator | Wednesday 28 May 2025 19:20:39 +0000 (0:00:00.395) 0:07:48.127 ********* 2025-05-28 19:26:34.246951 | orchestrator | changed: [testbed-node-0] 
2025-05-28 19:26:34.246955 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:26:34.246959 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:26:34.246963 | orchestrator | 2025-05-28 19:26:34.246968 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-28 19:26:34.246991 | orchestrator | 2025-05-28 19:26:34.246996 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-28 19:26:34.247000 | orchestrator | Wednesday 28 May 2025 19:20:42 +0000 (0:00:02.290) 0:07:50.417 ********* 2025-05-28 19:26:34.247004 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.247009 | orchestrator | 2025-05-28 19:26:34.247013 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-28 19:26:34.247020 | orchestrator | Wednesday 28 May 2025 19:20:42 +0000 (0:00:00.555) 0:07:50.973 ********* 2025-05-28 19:26:34.247024 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247029 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247033 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247037 | orchestrator | 2025-05-28 19:26:34.247041 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-28 19:26:34.247048 | orchestrator | Wednesday 28 May 2025 19:20:42 +0000 (0:00:00.325) 0:07:51.298 ********* 2025-05-28 19:26:34.247052 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.247056 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.247060 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.247065 | orchestrator | 2025-05-28 19:26:34.247069 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-28 19:26:34.247073 | orchestrator | Wednesday 28 May 2025 19:20:43 +0000 
(0:00:00.986) 0:07:52.285 ********* 2025-05-28 19:26:34.247077 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.247081 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.247085 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.247090 | orchestrator | 2025-05-28 19:26:34.247094 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-28 19:26:34.247098 | orchestrator | Wednesday 28 May 2025 19:20:44 +0000 (0:00:00.774) 0:07:53.060 ********* 2025-05-28 19:26:34.247102 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.247106 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.247111 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.247115 | orchestrator | 2025-05-28 19:26:34.247119 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-28 19:26:34.247123 | orchestrator | Wednesday 28 May 2025 19:20:45 +0000 (0:00:00.789) 0:07:53.850 ********* 2025-05-28 19:26:34.247127 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247132 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247136 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247140 | orchestrator | 2025-05-28 19:26:34.247144 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-28 19:26:34.247148 | orchestrator | Wednesday 28 May 2025 19:20:45 +0000 (0:00:00.357) 0:07:54.207 ********* 2025-05-28 19:26:34.247153 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247157 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247161 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247165 | orchestrator | 2025-05-28 19:26:34.247169 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-28 19:26:34.247173 | orchestrator | Wednesday 28 May 2025 19:20:46 +0000 (0:00:00.656) 0:07:54.864 ********* 
2025-05-28 19:26:34.247178 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247182 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247186 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247190 | orchestrator | 2025-05-28 19:26:34.247194 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-28 19:26:34.247199 | orchestrator | Wednesday 28 May 2025 19:20:46 +0000 (0:00:00.433) 0:07:55.298 ********* 2025-05-28 19:26:34.247203 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247207 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247211 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247215 | orchestrator | 2025-05-28 19:26:34.247220 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-28 19:26:34.247224 | orchestrator | Wednesday 28 May 2025 19:20:47 +0000 (0:00:00.381) 0:07:55.680 ********* 2025-05-28 19:26:34.247228 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247232 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247236 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247241 | orchestrator | 2025-05-28 19:26:34.247245 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-28 19:26:34.247249 | orchestrator | Wednesday 28 May 2025 19:20:47 +0000 (0:00:00.327) 0:07:56.007 ********* 2025-05-28 19:26:34.247256 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247260 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247264 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247268 | orchestrator | 2025-05-28 19:26:34.247272 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-28 19:26:34.247277 | orchestrator | Wednesday 28 May 2025 19:20:48 +0000 (0:00:00.619) 0:07:56.627 ********* 
2025-05-28 19:26:34.247281 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.247285 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.247289 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.247293 | orchestrator | 2025-05-28 19:26:34.247297 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-28 19:26:34.247302 | orchestrator | Wednesday 28 May 2025 19:20:49 +0000 (0:00:00.812) 0:07:57.439 ********* 2025-05-28 19:26:34.247306 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247310 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247314 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247318 | orchestrator | 2025-05-28 19:26:34.247322 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-28 19:26:34.247327 | orchestrator | Wednesday 28 May 2025 19:20:49 +0000 (0:00:00.319) 0:07:57.758 ********* 2025-05-28 19:26:34.247331 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247348 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247353 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247357 | orchestrator | 2025-05-28 19:26:34.247361 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-28 19:26:34.247365 | orchestrator | Wednesday 28 May 2025 19:20:49 +0000 (0:00:00.356) 0:07:58.115 ********* 2025-05-28 19:26:34.247370 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.247374 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.247378 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.247382 | orchestrator | 2025-05-28 19:26:34.247386 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-28 19:26:34.247391 | orchestrator | Wednesday 28 May 2025 19:20:50 +0000 (0:00:00.745) 0:07:58.861 ********* 2025-05-28 19:26:34.247395 | 
orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.247399 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.247403 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.247407 | orchestrator | 2025-05-28 19:26:34.247412 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-28 19:26:34.247415 | orchestrator | Wednesday 28 May 2025 19:20:50 +0000 (0:00:00.362) 0:07:59.224 ********* 2025-05-28 19:26:34.247419 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.247423 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.247427 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.247431 | orchestrator | 2025-05-28 19:26:34.247434 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-28 19:26:34.247440 | orchestrator | Wednesday 28 May 2025 19:20:51 +0000 (0:00:00.400) 0:07:59.624 ********* 2025-05-28 19:26:34.247444 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247448 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247452 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247456 | orchestrator | 2025-05-28 19:26:34.247459 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-28 19:26:34.247463 | orchestrator | Wednesday 28 May 2025 19:20:51 +0000 (0:00:00.373) 0:07:59.997 ********* 2025-05-28 19:26:34.247467 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247471 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247475 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247478 | orchestrator | 2025-05-28 19:26:34.247482 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-28 19:26:34.247486 | orchestrator | Wednesday 28 May 2025 19:20:52 +0000 (0:00:00.675) 0:08:00.673 ********* 2025-05-28 19:26:34.247490 | orchestrator | skipping: 
[testbed-node-3] 2025-05-28 19:26:34.247496 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247500 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247504 | orchestrator | 2025-05-28 19:26:34.247508 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-28 19:26:34.247512 | orchestrator | Wednesday 28 May 2025 19:20:52 +0000 (0:00:00.350) 0:08:01.023 ********* 2025-05-28 19:26:34.247516 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.247519 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.247523 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.247527 | orchestrator | 2025-05-28 19:26:34.247531 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-28 19:26:34.247535 | orchestrator | Wednesday 28 May 2025 19:20:53 +0000 (0:00:00.386) 0:08:01.410 ********* 2025-05-28 19:26:34.247538 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247542 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247546 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247550 | orchestrator | 2025-05-28 19:26:34.247554 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-28 19:26:34.247557 | orchestrator | Wednesday 28 May 2025 19:20:53 +0000 (0:00:00.365) 0:08:01.776 ********* 2025-05-28 19:26:34.247561 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247565 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247569 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247573 | orchestrator | 2025-05-28 19:26:34.247576 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-28 19:26:34.247580 | orchestrator | Wednesday 28 May 2025 19:20:54 +0000 (0:00:00.639) 0:08:02.416 ********* 2025-05-28 19:26:34.247584 | orchestrator | skipping: [testbed-node-3] 
2025-05-28 19:26:34.247588 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247592 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247596 | orchestrator | 2025-05-28 19:26:34.247599 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-28 19:26:34.247603 | orchestrator | Wednesday 28 May 2025 19:20:54 +0000 (0:00:00.441) 0:08:02.857 ********* 2025-05-28 19:26:34.247607 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247611 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247614 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247618 | orchestrator | 2025-05-28 19:26:34.247622 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-28 19:26:34.247626 | orchestrator | Wednesday 28 May 2025 19:20:54 +0000 (0:00:00.379) 0:08:03.236 ********* 2025-05-28 19:26:34.247630 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247633 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247637 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247641 | orchestrator | 2025-05-28 19:26:34.247645 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-28 19:26:34.247649 | orchestrator | Wednesday 28 May 2025 19:20:55 +0000 (0:00:00.379) 0:08:03.615 ********* 2025-05-28 19:26:34.247652 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247656 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247660 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247664 | orchestrator | 2025-05-28 19:26:34.247668 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-28 19:26:34.247671 | orchestrator | Wednesday 28 May 2025 19:20:55 +0000 (0:00:00.634) 0:08:04.250 ********* 2025-05-28 19:26:34.247675 | orchestrator | skipping: [testbed-node-3] 
2025-05-28 19:26:34.247679 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247683 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247686 | orchestrator | 2025-05-28 19:26:34.247690 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-28 19:26:34.247694 | orchestrator | Wednesday 28 May 2025 19:20:56 +0000 (0:00:00.348) 0:08:04.599 ********* 2025-05-28 19:26:34.247698 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247712 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247719 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247723 | orchestrator | 2025-05-28 19:26:34.247727 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-28 19:26:34.247731 | orchestrator | Wednesday 28 May 2025 19:20:56 +0000 (0:00:00.331) 0:08:04.930 ********* 2025-05-28 19:26:34.247735 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247739 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247742 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247746 | orchestrator | 2025-05-28 19:26:34.247750 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-28 19:26:34.247754 | orchestrator | Wednesday 28 May 2025 19:20:56 +0000 (0:00:00.330) 0:08:05.261 ********* 2025-05-28 19:26:34.247758 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247761 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247765 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247769 | orchestrator | 2025-05-28 19:26:34.247773 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-28 19:26:34.247777 | orchestrator | Wednesday 28 May 2025 19:20:57 +0000 (0:00:00.651) 
0:08:05.912 ********* 2025-05-28 19:26:34.247780 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247784 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247788 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247792 | orchestrator | 2025-05-28 19:26:34.247795 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-28 19:26:34.247799 | orchestrator | Wednesday 28 May 2025 19:20:57 +0000 (0:00:00.353) 0:08:06.266 ********* 2025-05-28 19:26:34.247803 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247807 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247811 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247815 | orchestrator | 2025-05-28 19:26:34.247818 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-28 19:26:34.247822 | orchestrator | Wednesday 28 May 2025 19:20:58 +0000 (0:00:00.333) 0:08:06.600 ********* 2025-05-28 19:26:34.247826 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-28 19:26:34.247830 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-28 19:26:34.247834 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247837 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-28 19:26:34.247841 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-28 19:26:34.247845 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247849 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-28 19:26:34.247852 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-28 19:26:34.247856 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247860 | orchestrator | 2025-05-28 19:26:34.247864 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-28 19:26:34.247868 | orchestrator | Wednesday 28 May 2025 19:20:58 +0000 
(0:00:00.395) 0:08:06.995 ********* 2025-05-28 19:26:34.247871 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-28 19:26:34.247875 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-28 19:26:34.247879 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247883 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-28 19:26:34.247887 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-28 19:26:34.247890 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247894 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-28 19:26:34.247898 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-28 19:26:34.247902 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247905 | orchestrator | 2025-05-28 19:26:34.247909 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-28 19:26:34.247915 | orchestrator | Wednesday 28 May 2025 19:20:59 +0000 (0:00:00.700) 0:08:07.696 ********* 2025-05-28 19:26:34.247919 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247923 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247927 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247930 | orchestrator | 2025-05-28 19:26:34.247934 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-28 19:26:34.247938 | orchestrator | Wednesday 28 May 2025 19:20:59 +0000 (0:00:00.384) 0:08:08.080 ********* 2025-05-28 19:26:34.247942 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.247945 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.247949 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.247953 | orchestrator | 2025-05-28 19:26:34.247957 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-28 19:26:34.247961 | orchestrator | Wednesday 28 May 2025 19:21:00 +0000 (0:00:00.411) 0:08:08.491 *********
2025-05-28 19:26:34.247965 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.247968 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.247980 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.247984 | orchestrator |
2025-05-28 19:26:34.247988 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-28 19:26:34.247992 | orchestrator | Wednesday 28 May 2025 19:21:00 +0000 (0:00:00.431) 0:08:08.923 *********
2025-05-28 19:26:34.247996 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.247999 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248003 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248007 | orchestrator |
2025-05-28 19:26:34.248011 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-28 19:26:34.248015 | orchestrator | Wednesday 28 May 2025 19:21:01 +0000 (0:00:00.623) 0:08:09.547 *********
2025-05-28 19:26:34.248046 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248054 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248058 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248062 | orchestrator |
2025-05-28 19:26:34.248066 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-28 19:26:34.248083 | orchestrator | Wednesday 28 May 2025 19:21:01 +0000 (0:00:00.327) 0:08:09.874 *********
2025-05-28 19:26:34.248087 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248091 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248095 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248099 | orchestrator |
2025-05-28 19:26:34.248103 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-28 19:26:34.248106 | orchestrator | Wednesday 28 May 2025 19:21:01 +0000 (0:00:00.414) 0:08:10.289 *********
2025-05-28 19:26:34.248110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.248114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.248118 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.248122 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248125 | orchestrator |
2025-05-28 19:26:34.248129 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-28 19:26:34.248133 | orchestrator | Wednesday 28 May 2025 19:21:02 +0000 (0:00:00.505) 0:08:10.795 *********
2025-05-28 19:26:34.248137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.248141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.248144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.248148 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248152 | orchestrator |
2025-05-28 19:26:34.248158 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-28 19:26:34.248162 | orchestrator | Wednesday 28 May 2025 19:21:02 +0000 (0:00:00.397) 0:08:11.193 *********
2025-05-28 19:26:34.248165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.248174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.248178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.248181 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248185 | orchestrator |
2025-05-28 19:26:34.248189 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-28 19:26:34.248193 | orchestrator | Wednesday 28 May 2025 19:21:03 +0000 (0:00:00.699) 0:08:11.892 *********
2025-05-28 19:26:34.248196 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248200 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248204 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248208 | orchestrator |
2025-05-28 19:26:34.248211 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-28 19:26:34.248215 | orchestrator | Wednesday 28 May 2025 19:21:04 +0000 (0:00:00.620) 0:08:12.513 *********
2025-05-28 19:26:34.248219 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-28 19:26:34.248223 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248226 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-28 19:26:34.248230 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248234 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-28 19:26:34.248238 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248242 | orchestrator |
2025-05-28 19:26:34.248245 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-28 19:26:34.248249 | orchestrator | Wednesday 28 May 2025 19:21:04 +0000 (0:00:00.593) 0:08:13.106 *********
2025-05-28 19:26:34.248253 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248257 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248261 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248264 | orchestrator |
2025-05-28 19:26:34.248268 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-28 19:26:34.248272 | orchestrator | Wednesday 28 May 2025 19:21:05 +0000 (0:00:00.355) 0:08:13.461 *********
2025-05-28 19:26:34.248276 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248279 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248283 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248287 | orchestrator |
2025-05-28 19:26:34.248291 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-28 19:26:34.248294 | orchestrator | Wednesday 28 May 2025 19:21:05 +0000 (0:00:00.349) 0:08:13.811 *********
2025-05-28 19:26:34.248298 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-28 19:26:34.248302 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248306 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-28 19:26:34.248309 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248313 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-28 19:26:34.248317 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248321 | orchestrator |
2025-05-28 19:26:34.248324 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-28 19:26:34.248328 | orchestrator | Wednesday 28 May 2025 19:21:06 +0000 (0:00:00.947) 0:08:14.758 *********
2025-05-28 19:26:34.248332 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-28 19:26:34.248336 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248340 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-28 19:26:34.248343 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248347 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-28 19:26:34.248351 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248355 | orchestrator |
2025-05-28 19:26:34.248359 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-28 19:26:34.248365 | orchestrator | Wednesday 28 May 2025 19:21:06 +0000 (0:00:00.388) 0:08:15.146 *********
2025-05-28 19:26:34.248369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.248372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.248376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.248390 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-28 19:26:34.248395 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-28 19:26:34.248399 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-28 19:26:34.248403 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248406 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248410 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-28 19:26:34.248414 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-28 19:26:34.248418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-28 19:26:34.248421 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248425 | orchestrator |
2025-05-28 19:26:34.248429 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-28 19:26:34.248433 | orchestrator | Wednesday 28 May 2025 19:21:07 +0000 (0:00:00.649) 0:08:15.796 *********
2025-05-28 19:26:34.248437 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248440 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248444 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248448 | orchestrator |
2025-05-28 19:26:34.248452 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-28 19:26:34.248455 | orchestrator | Wednesday 28 May 2025 19:21:08 +0000 (0:00:00.965) 0:08:16.762 *********
2025-05-28 19:26:34.248459 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-28 19:26:34.248465 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248468 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-28 19:26:34.248472 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248476 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-28 19:26:34.248480 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248484 | orchestrator |
2025-05-28 19:26:34.248487 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-28 19:26:34.248491 | orchestrator | Wednesday 28 May 2025 19:21:09 +0000 (0:00:00.603) 0:08:17.365 *********
2025-05-28 19:26:34.248495 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248499 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248502 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248506 | orchestrator |
2025-05-28 19:26:34.248510 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-28 19:26:34.248514 | orchestrator | Wednesday 28 May 2025 19:21:09 +0000 (0:00:00.867) 0:08:18.233 *********
2025-05-28 19:26:34.248517 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248521 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248525 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248529 | orchestrator |
2025-05-28 19:26:34.248533 | orchestrator | TASK [ceph-osd : set_fact add_osd] *********************************************
2025-05-28 19:26:34.248536 | orchestrator | Wednesday 28 May 2025 19:21:10 +0000 (0:00:00.583) 0:08:18.816 *********
2025-05-28 19:26:34.248540 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.248544 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.248548 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.248551 | orchestrator |
2025-05-28 19:26:34.248555 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] **********************************
2025-05-28 19:26:34.248559 | orchestrator | Wednesday 28 May 2025 19:21:11 +0000 (0:00:00.639) 0:08:19.455 *********
2025-05-28 19:26:34.248563 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-28 19:26:34.248566 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-28 19:26:34.248572 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-28 19:26:34.248576 | orchestrator |
2025-05-28 19:26:34.248580 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ******************************
2025-05-28 19:26:34.248584 | orchestrator | Wednesday 28 May 2025 19:21:11 +0000 (0:00:00.751) 0:08:20.207 *********
2025-05-28 19:26:34.248587 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.248591 | orchestrator |
2025-05-28 19:26:34.248595 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ********************
2025-05-28 19:26:34.248599 | orchestrator | Wednesday 28 May 2025 19:21:12 +0000 (0:00:00.564) 0:08:20.772 *********
2025-05-28 19:26:34.248602 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248606 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248610 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248614 | orchestrator |
2025-05-28 19:26:34.248618 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************
2025-05-28 19:26:34.248621 | orchestrator | Wednesday 28 May 2025 19:21:13 +0000 (0:00:00.598) 0:08:21.370 *********
2025-05-28 19:26:34.248625 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248629 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248633 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248636 | orchestrator |
2025-05-28 19:26:34.248640 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] **********************************
2025-05-28 19:26:34.248644 | orchestrator | Wednesday 28 May 2025 19:21:13 +0000 (0:00:00.329) 0:08:21.700 *********
2025-05-28 19:26:34.248648 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248651 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248655 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248659 | orchestrator |
2025-05-28 19:26:34.248663 | orchestrator | TASK [ceph-osd : disable transparent hugepage] *********************************
2025-05-28 19:26:34.248667 | orchestrator | Wednesday 28 May 2025 19:21:13 +0000 (0:00:00.327) 0:08:22.027 *********
2025-05-28 19:26:34.248670 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248674 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248678 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248682 | orchestrator |
2025-05-28 19:26:34.248685 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] *******************************
2025-05-28 19:26:34.248689 | orchestrator | Wednesday 28 May 2025 19:21:14 +0000 (0:00:00.317) 0:08:22.344 *********
2025-05-28 19:26:34.248693 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.248697 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.248700 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.248704 | orchestrator |
2025-05-28 19:26:34.248718 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************
2025-05-28 19:26:34.248723 | orchestrator | Wednesday 28 May 2025 19:21:14 +0000 (0:00:00.804) 0:08:23.149 *********
2025-05-28 19:26:34.248727 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.248731 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.248734 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.248738 | orchestrator |
2025-05-28 19:26:34.248742 | orchestrator | TASK [ceph-osd : apply operating system tuning] ********************************
2025-05-28 19:26:34.248746 | orchestrator | Wednesday 28 May 2025 19:21:15 +0000 (0:00:00.363) 0:08:23.512 *********
2025-05-28 19:26:34.248750 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-28 19:26:34.248753 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-28 19:26:34.248757 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-28 19:26:34.248761 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-28 19:26:34.248765 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-28 19:26:34.248773 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-28 19:26:34.248777 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-28 19:26:34.248781 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-28 19:26:34.248785 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-28 19:26:34.248788 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-28 19:26:34.248792 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-28 19:26:34.248796 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-28 19:26:34.248800 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-28 19:26:34.248804 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-28 19:26:34.248807 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-28 19:26:34.248811 | orchestrator |
2025-05-28 19:26:34.248815 | orchestrator | TASK [ceph-osd : install dependencies] *****************************************
2025-05-28 19:26:34.248819 | orchestrator | Wednesday 28 May 2025 19:21:17 +0000 (0:00:02.371) 0:08:25.883 *********
2025-05-28 19:26:34.248822 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.248826 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.248830 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.248834 | orchestrator |
2025-05-28 19:26:34.248838 | orchestrator | TASK [ceph-osd : include_tasks common.yml] *************************************
2025-05-28 19:26:34.248842 | orchestrator | Wednesday 28 May 2025 19:21:17 +0000 (0:00:00.328) 0:08:26.212 *********
2025-05-28 19:26:34.248845 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.248849 | orchestrator |
2025-05-28 19:26:34.248853 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] *********************
2025-05-28 19:26:34.248857 | orchestrator | Wednesday 28 May 2025 19:21:18 +0000 (0:00:00.798) 0:08:27.010 *********
2025-05-28 19:26:34.248860 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-28 19:26:34.248864 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-28 19:26:34.248868 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-28 19:26:34.248872 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-05-28 19:26:34.248876 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-05-28 19:26:34.248879 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-05-28 19:26:34.248883 | orchestrator |
2025-05-28 19:26:34.248887 | orchestrator | TASK [ceph-osd : get keys from monitors] ***************************************
2025-05-28 19:26:34.248891 | orchestrator | Wednesday 28 May 2025 19:21:19 +0000 (0:00:01.058) 0:08:28.069 *********
2025-05-28 19:26:34.248894 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-28 19:26:34.248898 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-28 19:26:34.248902 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-28 19:26:34.248906 | orchestrator |
2025-05-28 19:26:34.248910 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] ***********************************
2025-05-28 19:26:34.248913 | orchestrator | Wednesday 28 May 2025 19:21:21 +0000 (0:00:01.738) 0:08:29.808 *********
2025-05-28 19:26:34.248917 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-28 19:26:34.248921 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-28 19:26:34.248925 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.248929 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-28 19:26:34.248932 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-28 19:26:34.248939 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.248943 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-28 19:26:34.248946 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-28 19:26:34.248950 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.248954 | orchestrator |
2025-05-28 19:26:34.248958 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************
2025-05-28 19:26:34.248962 | orchestrator | Wednesday 28 May 2025 19:21:23 +0000 (0:00:01.667) 0:08:31.475 *********
2025-05-28 19:26:34.248984 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-28 19:26:34.248989 | orchestrator |
2025-05-28 19:26:34.248993 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] **************************
2025-05-28 19:26:34.248997 | orchestrator | Wednesday 28 May 2025 19:21:25 +0000 (0:00:02.664) 0:08:34.140 *********
2025-05-28 19:26:34.249001 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.249004 | orchestrator |
2025-05-28 19:26:34.249008 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] ***
2025-05-28 19:26:34.249012 | orchestrator | Wednesday 28 May 2025 19:21:26 +0000 (0:00:00.843) 0:08:34.984 *********
2025-05-28 19:26:34.249016 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249020 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249024 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249028 | orchestrator |
2025-05-28 19:26:34.249031 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] ***
2025-05-28 19:26:34.249035 | orchestrator | Wednesday 28 May 2025 19:21:26 +0000 (0:00:00.347) 0:08:35.332 *********
2025-05-28 19:26:34.249039 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249043 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249047 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249050 | orchestrator |
2025-05-28 19:26:34.249056 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] ***
2025-05-28 19:26:34.249060 | orchestrator | Wednesday 28 May 2025 19:21:27 +0000 (0:00:00.319) 0:08:35.651 *********
2025-05-28 19:26:34.249064 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249068 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249072 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249076 | orchestrator |
2025-05-28 19:26:34.249079 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] ***
2025-05-28 19:26:34.249083 | orchestrator | Wednesday 28 May 2025 19:21:27 +0000 (0:00:00.321) 0:08:35.973 *********
2025-05-28 19:26:34.249087 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.249091 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.249095 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.249099 | orchestrator |
2025-05-28 19:26:34.249102 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ******************************
2025-05-28 19:26:34.249106 | orchestrator | Wednesday 28 May 2025 19:21:28 +0000 (0:00:00.771) 0:08:36.745 *********
2025-05-28 19:26:34.249110 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.249114 | orchestrator |
2025-05-28 19:26:34.249118 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] *********************
2025-05-28 19:26:34.249121 | orchestrator | Wednesday 28 May 2025 19:21:29 +0000 (0:00:00.618) 0:08:37.363 *********
2025-05-28 19:26:34.249125 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3ed7399e-dc97-5c28-9f68-879666a39403', 'data_vg': 'ceph-3ed7399e-dc97-5c28-9f68-879666a39403'})
2025-05-28 19:26:34.249130 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-79c077cd-dd98-5cad-a8fa-86d8aa897eb3', 'data_vg': 'ceph-79c077cd-dd98-5cad-a8fa-86d8aa897eb3'})
2025-05-28 19:26:34.249134 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5db078c0-6128-52c2-9305-54ff671eda75', 'data_vg': 'ceph-5db078c0-6128-52c2-9305-54ff671eda75'})
2025-05-28 19:26:34.249140 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-117a45ef-4e6c-5b76-bea4-f0c196d92690', 'data_vg': 'ceph-117a45ef-4e6c-5b76-bea4-f0c196d92690'})
2025-05-28 19:26:34.249144 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0344b063-3cec-5ade-bfbf-9241287811af', 'data_vg': 'ceph-0344b063-3cec-5ade-bfbf-9241287811af'})
2025-05-28 19:26:34.249148 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fda1a2ce-c0e6-5c69-aaa5-109883ddc076', 'data_vg': 'ceph-fda1a2ce-c0e6-5c69-aaa5-109883ddc076'})
2025-05-28 19:26:34.249152 | orchestrator |
2025-05-28 19:26:34.249156 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************
2025-05-28 19:26:34.249159 | orchestrator | Wednesday 28 May 2025 19:22:08 +0000 (0:00:39.029) 0:09:16.393 *********
2025-05-28 19:26:34.249163 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249167 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249171 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249175 | orchestrator |
2025-05-28 19:26:34.249178 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] *********************************
2025-05-28 19:26:34.249182 | orchestrator | Wednesday 28 May 2025 19:22:08 +0000 (0:00:00.472) 0:09:16.866 *********
2025-05-28 19:26:34.249186 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.249190 | orchestrator |
2025-05-28 19:26:34.249194 | orchestrator | TASK [ceph-osd : get osd ids] **************************************************
2025-05-28 19:26:34.249197 | orchestrator | Wednesday 28 May 2025 19:22:09 +0000 (0:00:00.573) 0:09:17.439 *********
2025-05-28 19:26:34.249201 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.249205 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.249209 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.249213 | orchestrator |
2025-05-28 19:26:34.249216 | orchestrator | TASK [ceph-osd : collect osd ids] **********************************************
2025-05-28 19:26:34.249220 | orchestrator | Wednesday 28 May 2025 19:22:09 +0000 (0:00:00.734) 0:09:18.174 *********
2025-05-28 19:26:34.249224 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.249228 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.249232 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.249235 | orchestrator |
2025-05-28 19:26:34.249249 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************
2025-05-28 19:26:34.249254 | orchestrator | Wednesday 28 May 2025 19:22:11 +0000 (0:00:01.994) 0:09:20.169 *********
2025-05-28 19:26:34.249257 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.249261 | orchestrator |
2025-05-28 19:26:34.249265 | orchestrator | TASK [ceph-osd : generate systemd unit file] ***********************************
2025-05-28 19:26:34.249269 | orchestrator | Wednesday 28 May 2025 19:22:12 +0000 (0:00:00.552) 0:09:20.721 *********
2025-05-28 19:26:34.249273 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.249277 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.249280 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.249284 | orchestrator |
2025-05-28 19:26:34.249288 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************
2025-05-28 19:26:34.249292 | orchestrator | Wednesday 28 May 2025 19:22:13 +0000 (0:00:01.468) 0:09:22.190 *********
2025-05-28 19:26:34.249296 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.249300 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.249303 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.249307 | orchestrator |
2025-05-28 19:26:34.249311 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] ***************************************
2025-05-28 19:26:34.249315 | orchestrator | Wednesday 28 May 2025 19:22:15 +0000 (0:00:01.199) 0:09:23.389 *********
2025-05-28 19:26:34.249319 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.249324 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.249328 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.249335 | orchestrator |
2025-05-28 19:26:34.249339 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] *************
2025-05-28 19:26:34.249343 | orchestrator | Wednesday 28 May 2025 19:22:16 +0000 (0:00:01.762) 0:09:25.151 *********
2025-05-28 19:26:34.249347 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249351 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249355 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249358 | orchestrator |
2025-05-28 19:26:34.249362 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] ***********************
2025-05-28 19:26:34.249366 | orchestrator | Wednesday 28 May 2025 19:22:17 +0000 (0:00:00.344) 0:09:25.496 *********
2025-05-28 19:26:34.249370 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249374 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249378 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249382 | orchestrator |
2025-05-28 19:26:34.249385 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] ***
2025-05-28 19:26:34.249389 | orchestrator | Wednesday 28 May 2025 19:22:17 +0000 (0:00:00.604) 0:09:26.100 *********
2025-05-28 19:26:34.249393 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-28 19:26:34.249397 | orchestrator | ok: [testbed-node-4] => (item=2)
2025-05-28 19:26:34.249401 | orchestrator | ok: [testbed-node-5] => (item=1)
2025-05-28 19:26:34.249405 | orchestrator | ok: [testbed-node-3] => (item=4)
2025-05-28 19:26:34.249408 | orchestrator | ok: [testbed-node-4] => (item=3)
2025-05-28 19:26:34.249412 | orchestrator | ok: [testbed-node-5] => (item=5)
2025-05-28 19:26:34.249416 | orchestrator |
2025-05-28 19:26:34.249420 | orchestrator | TASK [ceph-osd : systemd start osd] ********************************************
2025-05-28 19:26:34.249424 | orchestrator | Wednesday 28 May 2025 19:22:18 +0000 (0:00:01.023) 0:09:27.123 *********
2025-05-28 19:26:34.249427 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-05-28 19:26:34.249431 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-05-28 19:26:34.249435 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-05-28 19:26:34.249439 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-05-28 19:26:34.249443 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-05-28 19:26:34.249446 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-05-28 19:26:34.249450 | orchestrator |
2025-05-28 19:26:34.249454 | orchestrator | TASK [ceph-osd : unset noup flag] **********************************************
2025-05-28 19:26:34.249458 | orchestrator | Wednesday 28 May 2025 19:22:22 +0000 (0:00:03.431) 0:09:30.555 *********
2025-05-28 19:26:34.249462 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249466 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249469 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-28 19:26:34.249473 | orchestrator |
2025-05-28 19:26:34.249477 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************
2025-05-28 19:26:34.249481 | orchestrator | Wednesday 28 May 2025 19:22:25 +0000 (0:00:02.853) 0:09:33.409 *********
2025-05-28 19:26:34.249485 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249489 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249493 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left).
2025-05-28 19:26:34.249496 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-28 19:26:34.249500 | orchestrator |
2025-05-28 19:26:34.249504 | orchestrator | TASK [ceph-osd : include crush_rules.yml] **************************************
2025-05-28 19:26:34.249508 | orchestrator | Wednesday 28 May 2025 19:22:37 +0000 (0:00:12.641) 0:09:46.050 *********
2025-05-28 19:26:34.249512 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249516 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249520 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249523 | orchestrator |
2025-05-28 19:26:34.249527 | orchestrator | TASK [ceph-osd : include openstack_config.yml] *********************************
2025-05-28 19:26:34.249531 | orchestrator | Wednesday 28 May 2025 19:22:38 +0000 (0:00:00.490) 0:09:46.541 *********
2025-05-28 19:26:34.249537 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249541 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249545 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249549 | orchestrator |
2025-05-28 19:26:34.249553 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-28 19:26:34.249556 | orchestrator | Wednesday 28 May 2025 19:22:39 +0000 (0:00:01.426) 0:09:47.968 *********
2025-05-28 19:26:34.249560 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.249564 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.249568 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.249572 | orchestrator |
2025-05-28 19:26:34.249575 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] **********************************
2025-05-28 19:26:34.249589 | orchestrator | Wednesday 28 May 2025 19:22:40 +0000 (0:00:00.722) 0:09:48.690 *********
2025-05-28 19:26:34.249593 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.249597 | orchestrator |
2025-05-28 19:26:34.249601 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] **********************
2025-05-28 19:26:34.249605 | orchestrator | Wednesday 28 May 2025 19:22:41 +0000 (0:00:00.863) 0:09:49.553 *********
2025-05-28 19:26:34.249609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.249613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.249617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.249620 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249624 | orchestrator |
2025-05-28 19:26:34.249628 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ********
2025-05-28 19:26:34.249632 | orchestrator | Wednesday 28 May 2025 19:22:41 +0000 (0:00:00.415) 0:09:49.969 *********
2025-05-28 19:26:34.249636 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249640 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249643 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249647 | orchestrator |
2025-05-28 19:26:34.249651 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] *******************************
2025-05-28 19:26:34.249658 | orchestrator | Wednesday 28 May 2025 19:22:41 +0000 (0:00:00.327) 0:09:50.297 *********
2025-05-28 19:26:34.249662 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249666 | orchestrator |
2025-05-28 19:26:34.249670 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] ***********************
2025-05-28 19:26:34.249673 | orchestrator | Wednesday 28 May 2025 19:22:42 +0000 (0:00:00.260) 0:09:50.557 *********
2025-05-28 19:26:34.249677 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249681 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249685 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249689 | orchestrator |
2025-05-28 19:26:34.249693 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] *********************************
2025-05-28 19:26:34.249696 | orchestrator | Wednesday 28 May 2025 19:22:42 +0000 (0:00:00.571) 0:09:51.128 *********
2025-05-28 19:26:34.249700 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249704 | orchestrator |
2025-05-28 19:26:34.249708 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ********************
2025-05-28 19:26:34.249712 | orchestrator | Wednesday 28 May 2025 19:22:43 +0000 (0:00:00.280) 0:09:51.408 *********
2025-05-28 19:26:34.249716 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249720 | orchestrator |
2025-05-28 19:26:34.249723 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-05-28 19:26:34.249727 | orchestrator | Wednesday 28 May 2025 19:22:43 +0000 (0:00:00.253) 0:09:51.662 *********
2025-05-28 19:26:34.249731 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249735 | orchestrator |
2025-05-28 19:26:34.249739 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ******************************
2025-05-28 19:26:34.249743 | orchestrator | Wednesday 28 May 2025 19:22:43 +0000 (0:00:00.131) 0:09:51.794 *********
2025-05-28 19:26:34.249749 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249753 | orchestrator |
2025-05-28 19:26:34.249757 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] *****************
2025-05-28 19:26:34.249760 | orchestrator | Wednesday 28 May 2025 19:22:43 +0000 (0:00:00.287) 0:09:52.082 *********
2025-05-28 19:26:34.249764 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249768 | orchestrator |
2025-05-28 19:26:34.249772 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] *******************
2025-05-28 19:26:34.249776 | orchestrator | Wednesday 28 May 2025 19:22:43 +0000 (0:00:00.243) 0:09:52.325 *********
2025-05-28 19:26:34.249780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.249784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.249788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.249792 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249795 | orchestrator |
2025-05-28 19:26:34.249799 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] *********
2025-05-28 19:26:34.249803 | orchestrator | Wednesday 28 May 2025 19:22:44 +0000 (0:00:00.425) 0:09:52.751 *********
2025-05-28 19:26:34.249807 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249811 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.249815 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.249818 | orchestrator |
2025-05-28 19:26:34.249822 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] ***************
2025-05-28 19:26:34.249826 | orchestrator | Wednesday 28 May 2025 19:22:44 +0000 (0:00:00.334) 0:09:53.086 *********
2025-05-28 19:26:34.249830 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249834 | orchestrator |
2025-05-28 19:26:34.249838 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] ****************************
2025-05-28 19:26:34.249842 | orchestrator | Wednesday 28 May 2025 19:22:45 +0000 (0:00:00.901) 0:09:53.987 *********
2025-05-28 19:26:34.249845 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.249849 | orchestrator |
2025-05-28 19:26:34.249853 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-28 19:26:34.249859 | orchestrator | Wednesday 28 May 2025
19:22:45 +0000 (0:00:00.270) 0:09:54.258 ********* 2025-05-28 19:26:34.249863 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.249867 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.249871 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.249875 | orchestrator | 2025-05-28 19:26:34.249878 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-28 19:26:34.249882 | orchestrator | 2025-05-28 19:26:34.249886 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-28 19:26:34.249890 | orchestrator | Wednesday 28 May 2025 19:22:49 +0000 (0:00:03.126) 0:09:57.385 ********* 2025-05-28 19:26:34.249904 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.249909 | orchestrator | 2025-05-28 19:26:34.249913 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-28 19:26:34.249916 | orchestrator | Wednesday 28 May 2025 19:22:50 +0000 (0:00:01.308) 0:09:58.693 ********* 2025-05-28 19:26:34.249920 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.249924 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.249928 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.249932 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.249936 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.249939 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.249943 | orchestrator | 2025-05-28 19:26:34.249947 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-28 19:26:34.249951 | orchestrator | Wednesday 28 May 2025 19:22:51 +0000 (0:00:00.766) 0:09:59.460 ********* 2025-05-28 19:26:34.249955 | orchestrator | skipping: [testbed-node-0] 2025-05-28 
19:26:34.249961 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.249965 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.249969 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.249990 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.249994 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.249998 | orchestrator | 2025-05-28 19:26:34.250002 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-28 19:26:34.250006 | orchestrator | Wednesday 28 May 2025 19:22:52 +0000 (0:00:01.299) 0:10:00.759 ********* 2025-05-28 19:26:34.250012 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250033 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250037 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250040 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.250044 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.250048 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.250052 | orchestrator | 2025-05-28 19:26:34.250056 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-28 19:26:34.250059 | orchestrator | Wednesday 28 May 2025 19:22:53 +0000 (0:00:01.241) 0:10:02.001 ********* 2025-05-28 19:26:34.250063 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250067 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250071 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250075 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.250079 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.250082 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.250086 | orchestrator | 2025-05-28 19:26:34.250090 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-28 19:26:34.250094 | orchestrator | Wednesday 28 May 2025 19:22:54 +0000 (0:00:01.117) 0:10:03.118 ********* 
2025-05-28 19:26:34.250098 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250101 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.250105 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250109 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.250113 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250117 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.250120 | orchestrator | 2025-05-28 19:26:34.250124 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-28 19:26:34.250128 | orchestrator | Wednesday 28 May 2025 19:22:55 +0000 (0:00:01.023) 0:10:04.141 ********* 2025-05-28 19:26:34.250132 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250136 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250139 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250143 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250147 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250151 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250154 | orchestrator | 2025-05-28 19:26:34.250158 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-28 19:26:34.250162 | orchestrator | Wednesday 28 May 2025 19:22:56 +0000 (0:00:00.679) 0:10:04.820 ********* 2025-05-28 19:26:34.250166 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250170 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250173 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250177 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250181 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250185 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250189 | orchestrator | 2025-05-28 19:26:34.250192 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] 
************************ 2025-05-28 19:26:34.250196 | orchestrator | Wednesday 28 May 2025 19:22:57 +0000 (0:00:00.947) 0:10:05.768 ********* 2025-05-28 19:26:34.250200 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250204 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250208 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250212 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250215 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250222 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250226 | orchestrator | 2025-05-28 19:26:34.250230 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-28 19:26:34.250234 | orchestrator | Wednesday 28 May 2025 19:22:58 +0000 (0:00:00.648) 0:10:06.416 ********* 2025-05-28 19:26:34.250238 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250242 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250245 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250249 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250253 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250257 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250260 | orchestrator | 2025-05-28 19:26:34.250264 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-28 19:26:34.250268 | orchestrator | Wednesday 28 May 2025 19:22:58 +0000 (0:00:00.810) 0:10:07.227 ********* 2025-05-28 19:26:34.250272 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250276 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250280 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250283 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250287 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250291 | orchestrator | skipping: [testbed-node-5] 
2025-05-28 19:26:34.250295 | orchestrator | 2025-05-28 19:26:34.250299 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-28 19:26:34.250303 | orchestrator | Wednesday 28 May 2025 19:22:59 +0000 (0:00:00.655) 0:10:07.883 ********* 2025-05-28 19:26:34.250307 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.250310 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.250325 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.250330 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.250334 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.250338 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.250342 | orchestrator | 2025-05-28 19:26:34.250345 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-28 19:26:34.250349 | orchestrator | Wednesday 28 May 2025 19:23:00 +0000 (0:00:01.275) 0:10:09.159 ********* 2025-05-28 19:26:34.250353 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250357 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250361 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250365 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250368 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250372 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250376 | orchestrator | 2025-05-28 19:26:34.250380 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-28 19:26:34.250384 | orchestrator | Wednesday 28 May 2025 19:23:01 +0000 (0:00:00.646) 0:10:09.805 ********* 2025-05-28 19:26:34.250387 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.250391 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.250395 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.250399 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250403 | orchestrator | skipping: 
[testbed-node-4] 2025-05-28 19:26:34.250406 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250410 | orchestrator | 2025-05-28 19:26:34.250414 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-28 19:26:34.250420 | orchestrator | Wednesday 28 May 2025 19:23:02 +0000 (0:00:00.849) 0:10:10.655 ********* 2025-05-28 19:26:34.250424 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250428 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250431 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250435 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.250439 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.250443 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.250447 | orchestrator | 2025-05-28 19:26:34.250450 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-28 19:26:34.250454 | orchestrator | Wednesday 28 May 2025 19:23:02 +0000 (0:00:00.679) 0:10:11.334 ********* 2025-05-28 19:26:34.250461 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250465 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250469 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250473 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.250476 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.250480 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.250484 | orchestrator | 2025-05-28 19:26:34.250488 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-28 19:26:34.250492 | orchestrator | Wednesday 28 May 2025 19:23:03 +0000 (0:00:00.857) 0:10:12.192 ********* 2025-05-28 19:26:34.250495 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250499 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250503 | orchestrator | skipping: [testbed-node-2] 2025-05-28 
19:26:34.250507 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.250511 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.250514 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.250518 | orchestrator | 2025-05-28 19:26:34.250522 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-28 19:26:34.250526 | orchestrator | Wednesday 28 May 2025 19:23:04 +0000 (0:00:00.666) 0:10:12.858 ********* 2025-05-28 19:26:34.250530 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250534 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250537 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250541 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250545 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250549 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250552 | orchestrator | 2025-05-28 19:26:34.250556 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-28 19:26:34.250560 | orchestrator | Wednesday 28 May 2025 19:23:05 +0000 (0:00:00.878) 0:10:13.736 ********* 2025-05-28 19:26:34.250564 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250568 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250571 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250575 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250579 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250583 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250587 | orchestrator | 2025-05-28 19:26:34.250590 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-28 19:26:34.250594 | orchestrator | Wednesday 28 May 2025 19:23:06 +0000 (0:00:00.673) 0:10:14.410 ********* 2025-05-28 19:26:34.250598 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.250602 | 
orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.250606 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.250609 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250613 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250617 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250621 | orchestrator | 2025-05-28 19:26:34.250624 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-28 19:26:34.250628 | orchestrator | Wednesday 28 May 2025 19:23:06 +0000 (0:00:00.900) 0:10:15.310 ********* 2025-05-28 19:26:34.250632 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:26:34.250636 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:26:34.250640 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:26:34.250643 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.250647 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.250651 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.250655 | orchestrator | 2025-05-28 19:26:34.250658 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-28 19:26:34.250662 | orchestrator | Wednesday 28 May 2025 19:23:07 +0000 (0:00:00.815) 0:10:16.125 ********* 2025-05-28 19:26:34.250666 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250670 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250674 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250680 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250684 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250688 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250692 | orchestrator | 2025-05-28 19:26:34.250695 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-28 19:26:34.250699 | orchestrator | Wednesday 28 May 2025 19:23:08 +0000 (0:00:00.991) 0:10:17.117 ********* 2025-05-28 
19:26:34.250712 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250717 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250721 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250725 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250728 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250732 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250736 | orchestrator | 2025-05-28 19:26:34.250739 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-28 19:26:34.250743 | orchestrator | Wednesday 28 May 2025 19:23:09 +0000 (0:00:00.762) 0:10:17.879 ********* 2025-05-28 19:26:34.250747 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250751 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250754 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250758 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250762 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250765 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250769 | orchestrator | 2025-05-28 19:26:34.250773 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-28 19:26:34.250777 | orchestrator | Wednesday 28 May 2025 19:23:10 +0000 (0:00:00.867) 0:10:18.748 ********* 2025-05-28 19:26:34.250780 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250784 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250788 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250792 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250795 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250799 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250803 | orchestrator | 2025-05-28 19:26:34.250808 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] 
****************** 2025-05-28 19:26:34.250812 | orchestrator | Wednesday 28 May 2025 19:23:11 +0000 (0:00:00.662) 0:10:19.410 ********* 2025-05-28 19:26:34.250816 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250819 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250823 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250827 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250831 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250834 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250838 | orchestrator | 2025-05-28 19:26:34.250842 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-28 19:26:34.250846 | orchestrator | Wednesday 28 May 2025 19:23:12 +0000 (0:00:00.961) 0:10:20.372 ********* 2025-05-28 19:26:34.250849 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250853 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250857 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250860 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250864 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250868 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250872 | orchestrator | 2025-05-28 19:26:34.250875 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-28 19:26:34.250879 | orchestrator | Wednesday 28 May 2025 19:23:12 +0000 (0:00:00.666) 0:10:21.038 ********* 2025-05-28 19:26:34.250883 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250887 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250890 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250894 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250898 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250901 | orchestrator | skipping: [testbed-node-5] 2025-05-28 
19:26:34.250907 | orchestrator | 2025-05-28 19:26:34.250911 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-28 19:26:34.250915 | orchestrator | Wednesday 28 May 2025 19:23:13 +0000 (0:00:00.860) 0:10:21.899 ********* 2025-05-28 19:26:34.250919 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250923 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250926 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250930 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250934 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250937 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250941 | orchestrator | 2025-05-28 19:26:34.250945 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-28 19:26:34.250949 | orchestrator | Wednesday 28 May 2025 19:23:14 +0000 (0:00:00.686) 0:10:22.585 ********* 2025-05-28 19:26:34.250952 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250956 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.250960 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.250964 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.250967 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.250979 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.250983 | orchestrator | 2025-05-28 19:26:34.250987 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-28 19:26:34.250991 | orchestrator | Wednesday 28 May 2025 19:23:15 +0000 (0:00:00.911) 0:10:23.496 ********* 2025-05-28 19:26:34.250995 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.250998 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.251002 | orchestrator | skipping: [testbed-node-2] 2025-05-28 
19:26:34.251006 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.251010 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.251013 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.251017 | orchestrator | 2025-05-28 19:26:34.251021 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-28 19:26:34.251025 | orchestrator | Wednesday 28 May 2025 19:23:15 +0000 (0:00:00.709) 0:10:24.206 ********* 2025-05-28 19:26:34.251029 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.251032 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.251036 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.251040 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.251044 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.251047 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.251051 | orchestrator | 2025-05-28 19:26:34.251055 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-28 19:26:34.251059 | orchestrator | Wednesday 28 May 2025 19:23:16 +0000 (0:00:00.920) 0:10:25.126 ********* 2025-05-28 19:26:34.251063 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.251066 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.251070 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.251085 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.251089 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.251093 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.251097 | orchestrator | 2025-05-28 19:26:34.251101 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-28 19:26:34.251105 | orchestrator | Wednesday 28 May 2025 19:23:17 +0000 (0:00:00.665) 0:10:25.792 ********* 2025-05-28 19:26:34.251108 | orchestrator | skipping: 
[testbed-node-0] => (item=)  2025-05-28 19:26:34.251112 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-28 19:26:34.251116 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.251120 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-28 19:26:34.251123 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-28 19:26:34.251127 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.251131 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-28 19:26:34.251137 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-28 19:26:34.251141 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.251144 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-28 19:26:34.251148 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-28 19:26:34.251152 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.251156 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-28 19:26:34.251159 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-28 19:26:34.251163 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.251169 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-28 19:26:34.251173 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-28 19:26:34.251176 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.251180 | orchestrator | 2025-05-28 19:26:34.251184 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-28 19:26:34.251188 | orchestrator | Wednesday 28 May 2025 19:23:18 +0000 (0:00:00.958) 0:10:26.750 ********* 2025-05-28 19:26:34.251192 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-28 19:26:34.251196 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-28 19:26:34.251199 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.251203 | orchestrator | skipping: [testbed-node-1] => 
(item=osd memory target)  2025-05-28 19:26:34.251207 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-28 19:26:34.251211 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.251214 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-28 19:26:34.251218 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-28 19:26:34.251222 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.251226 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-28 19:26:34.251229 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-28 19:26:34.251233 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.251237 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-28 19:26:34.251241 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-28 19:26:34.251245 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.251248 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-28 19:26:34.251252 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-28 19:26:34.251256 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.251259 | orchestrator | 2025-05-28 19:26:34.251263 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-28 19:26:34.251267 | orchestrator | Wednesday 28 May 2025 19:23:19 +0000 (0:00:00.752) 0:10:27.503 ********* 2025-05-28 19:26:34.251271 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:26:34.251275 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:26:34.251278 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:26:34.251282 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.251286 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.251290 | orchestrator | skipping: [testbed-node-5] 
2025-05-28 19:26:34.251293 | orchestrator |
2025-05-28 19:26:34.251297 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-28 19:26:34.251301 | orchestrator | Wednesday 28 May 2025 19:23:20 +0000 (0:00:00.883) 0:10:28.387 *********
2025-05-28 19:26:34.251305 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251308 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251312 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251316 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251320 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251323 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251327 | orchestrator |
2025-05-28 19:26:34.251331 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-28 19:26:34.251337 | orchestrator | Wednesday 28 May 2025 19:23:20 +0000 (0:00:00.706) 0:10:29.094 *********
2025-05-28 19:26:34.251341 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251345 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251348 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251352 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251356 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251360 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251363 | orchestrator |
2025-05-28 19:26:34.251367 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-28 19:26:34.251371 | orchestrator | Wednesday 28 May 2025 19:23:21 +0000 (0:00:00.915) 0:10:30.010 *********
2025-05-28 19:26:34.251375 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251378 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251382 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251386 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251390 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251393 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251397 | orchestrator |
2025-05-28 19:26:34.251401 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-28 19:26:34.251405 | orchestrator | Wednesday 28 May 2025 19:23:22 +0000 (0:00:00.679) 0:10:30.689 *********
2025-05-28 19:26:34.251418 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251423 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251427 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251430 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251434 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251438 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251442 | orchestrator |
2025-05-28 19:26:34.251446 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-28 19:26:34.251449 | orchestrator | Wednesday 28 May 2025 19:23:23 +0000 (0:00:00.894) 0:10:31.583 *********
2025-05-28 19:26:34.251453 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251457 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251461 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251464 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251468 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251472 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251476 | orchestrator |
2025-05-28 19:26:34.251479 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-28 19:26:34.251483 | orchestrator | Wednesday 28 May 2025 19:23:24 +0000 (0:00:00.791) 0:10:32.375 *********
2025-05-28 19:26:34.251487 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-28 19:26:34.251491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-28 19:26:34.251495 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-28 19:26:34.251500 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251504 | orchestrator |
2025-05-28 19:26:34.251508 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-28 19:26:34.251512 | orchestrator | Wednesday 28 May 2025 19:23:24 +0000 (0:00:00.448) 0:10:32.823 *********
2025-05-28 19:26:34.251516 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-28 19:26:34.251520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-28 19:26:34.251523 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-28 19:26:34.251527 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251531 | orchestrator |
2025-05-28 19:26:34.251535 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-28 19:26:34.251539 | orchestrator | Wednesday 28 May 2025 19:23:24 +0000 (0:00:00.442) 0:10:33.266 *********
2025-05-28 19:26:34.251542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-28 19:26:34.251549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-28 19:26:34.251552 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-28 19:26:34.251556 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251560 | orchestrator |
2025-05-28 19:26:34.251564 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-28 19:26:34.251568 | orchestrator | Wednesday 28 May 2025 19:23:25 +0000 (0:00:00.702) 0:10:33.968 *********
2025-05-28 19:26:34.251572 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251575 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251579 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251583 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251587 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251590 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251594 | orchestrator |
2025-05-28 19:26:34.251598 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-28 19:26:34.251602 | orchestrator | Wednesday 28 May 2025 19:23:26 +0000 (0:00:00.968) 0:10:34.937 *********
2025-05-28 19:26:34.251605 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-28 19:26:34.251609 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251613 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-28 19:26:34.251617 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251621 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-28 19:26:34.251624 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-28 19:26:34.251628 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251632 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251636 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-28 19:26:34.251639 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251643 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-28 19:26:34.251647 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251650 | orchestrator |
2025-05-28 19:26:34.251654 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-28 19:26:34.251658 | orchestrator | Wednesday 28 May 2025 19:23:27 +0000 (0:00:00.820) 0:10:35.757 *********
2025-05-28 19:26:34.251662 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251666 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251669 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251673 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251677 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251680 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251684 | orchestrator |
2025-05-28 19:26:34.251688 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-28 19:26:34.251692 | orchestrator | Wednesday 28 May 2025 19:23:28 +0000 (0:00:00.948) 0:10:36.705 *********
2025-05-28 19:26:34.251696 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251699 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251703 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251707 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251710 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251714 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251718 | orchestrator |
2025-05-28 19:26:34.251722 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-28 19:26:34.251726 | orchestrator | Wednesday 28 May 2025 19:23:29 +0000 (0:00:00.632) 0:10:37.338 *********
2025-05-28 19:26:34.251729 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-28 19:26:34.251733 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251737 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-28 19:26:34.251741 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251745 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-28 19:26:34.251748 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251763 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-28 19:26:34.251771 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251775 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-28 19:26:34.251778 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251782 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-28 19:26:34.251786 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251790 | orchestrator |
2025-05-28 19:26:34.251793 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-28 19:26:34.251797 | orchestrator | Wednesday 28 May 2025 19:23:30 +0000 (0:00:01.423) 0:10:38.761 *********
2025-05-28 19:26:34.251801 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251805 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251808 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251812 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-28 19:26:34.251816 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251820 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-28 19:26:34.251824 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251829 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-28 19:26:34.251833 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251837 | orchestrator |
2025-05-28 19:26:34.251841 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-28 19:26:34.251845 | orchestrator | Wednesday 28 May 2025 19:23:31 +0000 (0:00:00.858) 0:10:39.620 *********
2025-05-28 19:26:34.251848 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-28 19:26:34.251852 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-28 19:26:34.251856 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-28 19:26:34.251860 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251863 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-28 19:26:34.251867 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-28 19:26:34.251871 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-28 19:26:34.251875 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-28 19:26:34.251878 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-28 19:26:34.251882 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-28 19:26:34.251886 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:26:34.251893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:26:34.251897 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:26:34.251904 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-28 19:26:34.251908 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-28 19:26:34.251912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-28 19:26:34.251916 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251919 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251923 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-28 19:26:34.251927 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-28 19:26:34.251930 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-28 19:26:34.251934 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251938 | orchestrator |
2025-05-28 19:26:34.251942 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-28 19:26:34.251945 | orchestrator | Wednesday 28 May 2025 19:23:33 +0000 (0:00:01.775) 0:10:41.396 *********
2025-05-28 19:26:34.251952 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.251955 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.251959 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.251963 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.251967 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.251989 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.251993 | orchestrator |
2025-05-28 19:26:34.251997 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-28 19:26:34.252001 | orchestrator | Wednesday 28 May 2025 19:23:34 +0000 (0:00:01.441) 0:10:42.838 *********
2025-05-28 19:26:34.252005 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.252008 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.252012 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.252016 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-28 19:26:34.252020 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252023 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-28 19:26:34.252027 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252031 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-28 19:26:34.252035 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252038 | orchestrator |
2025-05-28 19:26:34.252042 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-28 19:26:34.252046 | orchestrator | Wednesday 28 May 2025 19:23:35 +0000 (0:00:01.393) 0:10:44.232 *********
2025-05-28 19:26:34.252050 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.252054 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.252057 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.252061 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252065 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252068 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252072 | orchestrator |
2025-05-28 19:26:34.252076 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-28 19:26:34.252082 | orchestrator | Wednesday 28 May 2025 19:23:37 +0000 (0:00:01.409) 0:10:45.641 *********
2025-05-28 19:26:34.252086 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:26:34.252090 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:26:34.252093 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:26:34.252097 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252101 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252105 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252109 | orchestrator |
2025-05-28 19:26:34.252113 | orchestrator | TASK [ceph-crash : create client.crash keyring] ********************************
2025-05-28 19:26:34.252116 | orchestrator | Wednesday 28 May 2025 19:23:38 +0000 (0:00:01.346) 0:10:46.988 *********
2025-05-28 19:26:34.252120 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:26:34.252124 | orchestrator |
2025-05-28 19:26:34.252128 | orchestrator | TASK [ceph-crash : get keys from monitors] *************************************
2025-05-28 19:26:34.252132 | orchestrator | Wednesday 28 May 2025 19:23:41 +0000 (0:00:03.036) 0:10:50.025 *********
2025-05-28 19:26:34.252136 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.252139 | orchestrator |
2025-05-28 19:26:34.252143 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] *********************************
2025-05-28 19:26:34.252147 | orchestrator | Wednesday 28 May 2025 19:23:43 +0000 (0:00:01.590) 0:10:51.615 *********
2025-05-28 19:26:34.252151 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.252155 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:26:34.252158 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:26:34.252162 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.252166 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.252172 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.252176 | orchestrator |
2025-05-28 19:26:34.252179 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] **************************
2025-05-28 19:26:34.252183 | orchestrator | Wednesday 28 May 2025 19:23:44 +0000 (0:00:01.524) 0:10:53.139 *********
2025-05-28 19:26:34.252189 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:26:34.252193 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:26:34.252197 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:26:34.252201 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.252205 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.252208 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.252212 | orchestrator |
2025-05-28 19:26:34.252216 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] **********************************
2025-05-28 19:26:34.252220 | orchestrator | Wednesday 28 May 2025 19:23:46 +0000 (0:00:01.480) 0:10:54.620 *********
2025-05-28 19:26:34.252224 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.252228 | orchestrator |
2025-05-28 19:26:34.252232 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ********
2025-05-28 19:26:34.252236 | orchestrator | Wednesday 28 May 2025 19:23:47 +0000 (0:00:01.340) 0:10:55.960 *********
2025-05-28 19:26:34.252240 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:26:34.252243 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:26:34.252247 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:26:34.252251 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.252255 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.252259 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.252263 | orchestrator |
2025-05-28 19:26:34.252266 | orchestrator | TASK [ceph-crash : start the ceph-crash service] *******************************
2025-05-28 19:26:34.252270 | orchestrator | Wednesday 28 May 2025 19:23:49 +0000 (0:00:02.171) 0:10:58.132 *********
2025-05-28 19:26:34.252274 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:26:34.252278 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:26:34.252282 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:26:34.252285 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.252289 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.252293 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.252297 | orchestrator |
2025-05-28 19:26:34.252301 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] ****************************
2025-05-28 19:26:34.252304 | orchestrator | Wednesday 28 May 2025 19:23:54 +0000 (0:00:04.283) 0:11:02.415 *********
2025-05-28 19:26:34.252308 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.252312 | orchestrator |
2025-05-28 19:26:34.252316 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ******
2025-05-28 19:26:34.252320 | orchestrator | Wednesday 28 May 2025 19:23:55 +0000 (0:00:01.675) 0:11:04.090 *********
2025-05-28 19:26:34.252324 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.252327 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.252331 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.252335 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.252339 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.252343 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.252346 | orchestrator |
2025-05-28 19:26:34.252350 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] ****************
2025-05-28 19:26:34.252354 | orchestrator | Wednesday 28 May 2025 19:23:56 +0000 (0:00:00.787) 0:11:04.878 *********
2025-05-28 19:26:34.252358 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:26:34.252362 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.252366 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:26:34.252369 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:26:34.252373 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.252377 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.252381 | orchestrator |
2025-05-28 19:26:34.252385 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] *******
2025-05-28 19:26:34.252388 | orchestrator | Wednesday 28 May 2025 19:23:59 +0000 (0:00:02.469) 0:11:07.347 *********
2025-05-28 19:26:34.252394 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:26:34.252398 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:26:34.252402 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:26:34.252406 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.252410 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.252413 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.252417 | orchestrator |
2025-05-28 19:26:34.252421 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-05-28 19:26:34.252425 | orchestrator |
2025-05-28 19:26:34.252429 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-28 19:26:34.252435 | orchestrator | Wednesday 28 May 2025 19:24:02 +0000 (0:00:03.130) 0:11:10.478 *********
2025-05-28 19:26:34.252439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:26:34.252443 | orchestrator |
2025-05-28 19:26:34.252447 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-28 19:26:34.252450 | orchestrator | Wednesday 28 May 2025 19:24:02 +0000 (0:00:00.556) 0:11:11.035 *********
2025-05-28 19:26:34.252454 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252458 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252462 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252466 | orchestrator |
2025-05-28 19:26:34.252469 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-28 19:26:34.252473 | orchestrator | Wednesday 28 May 2025 19:24:03 +0000 (0:00:00.619) 0:11:11.655 *********
2025-05-28 19:26:34.252477 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.252481 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.252485 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.252488 | orchestrator |
2025-05-28 19:26:34.252492 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-28 19:26:34.252496 | orchestrator | Wednesday 28 May 2025 19:24:04 +0000 (0:00:00.706) 0:11:12.361 *********
2025-05-28 19:26:34.252500 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.252504 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.252509 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.252513 | orchestrator |
2025-05-28 19:26:34.252517 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-28 19:26:34.252521 | orchestrator | Wednesday 28 May 2025 19:24:04 +0000 (0:00:00.707) 0:11:13.069 *********
2025-05-28 19:26:34.252525 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.252528 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.252532 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.252536 | orchestrator |
2025-05-28 19:26:34.252540 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-28 19:26:34.252544 | orchestrator | Wednesday 28 May 2025 19:24:05 +0000 (0:00:01.207) 0:11:14.276 *********
2025-05-28 19:26:34.252548 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252552 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252555 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252559 | orchestrator |
2025-05-28 19:26:34.252563 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-28 19:26:34.252567 | orchestrator | Wednesday 28 May 2025 19:24:06 +0000 (0:00:00.503) 0:11:14.780 *********
2025-05-28 19:26:34.252571 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252575 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252578 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252582 | orchestrator |
2025-05-28 19:26:34.252586 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-28 19:26:34.252590 | orchestrator | Wednesday 28 May 2025 19:24:06 +0000 (0:00:00.473) 0:11:15.254 *********
2025-05-28 19:26:34.252594 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252597 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252601 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252605 | orchestrator |
2025-05-28 19:26:34.252611 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-28 19:26:34.252615 | orchestrator | Wednesday 28 May 2025 19:24:07 +0000 (0:00:00.442) 0:11:15.697 *********
2025-05-28 19:26:34.252619 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252623 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252627 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252631 | orchestrator |
2025-05-28 19:26:34.252635 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-28 19:26:34.252638 | orchestrator | Wednesday 28 May 2025 19:24:08 +0000 (0:00:00.669) 0:11:16.367 *********
2025-05-28 19:26:34.252642 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252646 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252650 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252654 | orchestrator |
2025-05-28 19:26:34.252657 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-28 19:26:34.252661 | orchestrator | Wednesday 28 May 2025 19:24:08 +0000 (0:00:00.341) 0:11:16.708 *********
2025-05-28 19:26:34.252665 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252669 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252673 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252676 | orchestrator |
2025-05-28 19:26:34.252680 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-28 19:26:34.252684 | orchestrator | Wednesday 28 May 2025 19:24:08 +0000 (0:00:00.364) 0:11:17.073 *********
2025-05-28 19:26:34.252688 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.252692 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.252696 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.252699 | orchestrator |
2025-05-28 19:26:34.252703 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-28 19:26:34.252707 | orchestrator | Wednesday 28 May 2025 19:24:09 +0000 (0:00:00.711) 0:11:17.784 *********
2025-05-28 19:26:34.252711 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252715 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252718 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252722 | orchestrator |
2025-05-28 19:26:34.252726 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-28 19:26:34.252730 | orchestrator | Wednesday 28 May 2025 19:24:10 +0000 (0:00:00.605) 0:11:18.390 *********
2025-05-28 19:26:34.252734 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252737 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252741 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252745 | orchestrator |
2025-05-28 19:26:34.252749 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-28 19:26:34.252753 | orchestrator | Wednesday 28 May 2025 19:24:10 +0000 (0:00:00.383) 0:11:18.774 *********
2025-05-28 19:26:34.252757 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.252760 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.252764 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.252768 | orchestrator |
2025-05-28 19:26:34.252772 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-28 19:26:34.252777 | orchestrator | Wednesday 28 May 2025 19:24:10 +0000 (0:00:00.345) 0:11:19.119 *********
2025-05-28 19:26:34.252782 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.252785 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.252789 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.252793 | orchestrator |
2025-05-28 19:26:34.252797 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-28 19:26:34.252801 | orchestrator | Wednesday 28 May 2025 19:24:11 +0000 (0:00:00.342) 0:11:19.462 *********
2025-05-28 19:26:34.252805 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.252808 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.252812 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.252816 | orchestrator |
2025-05-28 19:26:34.252820 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-28 19:26:34.252827 | orchestrator | Wednesday 28 May 2025 19:24:11 +0000 (0:00:00.625) 0:11:20.088 *********
2025-05-28 19:26:34.252831 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252835 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252839 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252843 | orchestrator |
2025-05-28 19:26:34.252846 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-28 19:26:34.252850 | orchestrator | Wednesday 28 May 2025 19:24:12 +0000 (0:00:00.387) 0:11:20.475 *********
2025-05-28 19:26:34.252854 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252858 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252862 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252866 | orchestrator |
2025-05-28 19:26:34.252871 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-28 19:26:34.252875 | orchestrator | Wednesday 28 May 2025 19:24:12 +0000 (0:00:00.316) 0:11:20.792 *********
2025-05-28 19:26:34.252879 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252882 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252886 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252890 | orchestrator |
2025-05-28 19:26:34.252894 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-28 19:26:34.252898 | orchestrator | Wednesday 28 May 2025 19:24:12 +0000 (0:00:00.336) 0:11:21.128 *********
2025-05-28 19:26:34.252902 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.252905 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.252909 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.252913 | orchestrator |
2025-05-28 19:26:34.252917 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-28 19:26:34.252921 | orchestrator | Wednesday 28 May 2025 19:24:13 +0000 (0:00:00.625) 0:11:21.754 *********
2025-05-28 19:26:34.252924 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252928 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252932 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252936 | orchestrator |
2025-05-28 19:26:34.252940 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-28 19:26:34.252943 | orchestrator | Wednesday 28 May 2025 19:24:13 +0000 (0:00:00.375) 0:11:22.129 *********
2025-05-28 19:26:34.252947 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252951 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252955 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252959 | orchestrator |
2025-05-28 19:26:34.252962 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-28 19:26:34.252966 | orchestrator | Wednesday 28 May 2025 19:24:14 +0000 (0:00:00.347) 0:11:22.506 *********
2025-05-28 19:26:34.252977 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.252981 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.252985 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:26:34.252989 | orchestrator |
2025-05-28 19:26:34.252993 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-28 19:26:34.252997 | orchestrator | Wednesday 28 May 2025 19:24:14 +0000 (0:00:00.612) 0:11:22.854 *********
2025-05-28 19:26:34.253000 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.253004 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:26:34.253008 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253012 | orchestrator | 2025-05-28 19:26:34.253015 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-28 19:26:34.253019 | orchestrator | Wednesday 28 May 2025 19:24:15 +0000 (0:00:00.612) 0:11:23.467 ********* 2025-05-28 19:26:34.253023 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253027 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253030 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253034 | orchestrator | 2025-05-28 19:26:34.253038 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-28 19:26:34.253042 | orchestrator | Wednesday 28 May 2025 19:24:15 +0000 (0:00:00.365) 0:11:23.833 ********* 2025-05-28 19:26:34.253048 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253052 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253056 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253059 | orchestrator | 2025-05-28 19:26:34.253063 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-28 19:26:34.253067 | orchestrator | Wednesday 28 May 2025 19:24:15 +0000 (0:00:00.343) 0:11:24.176 ********* 2025-05-28 19:26:34.253071 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253075 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253078 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253082 | orchestrator | 2025-05-28 19:26:34.253086 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-28 19:26:34.253090 | orchestrator | Wednesday 28 May 2025 19:24:16 +0000 (0:00:00.331) 0:11:24.508 ********* 2025-05-28 19:26:34.253093 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253097 | orchestrator | skipping: 
[testbed-node-4] 2025-05-28 19:26:34.253101 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253105 | orchestrator | 2025-05-28 19:26:34.253109 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-28 19:26:34.253112 | orchestrator | Wednesday 28 May 2025 19:24:16 +0000 (0:00:00.606) 0:11:25.114 ********* 2025-05-28 19:26:34.253116 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253120 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253124 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253128 | orchestrator | 2025-05-28 19:26:34.253133 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-28 19:26:34.253137 | orchestrator | Wednesday 28 May 2025 19:24:17 +0000 (0:00:00.346) 0:11:25.461 ********* 2025-05-28 19:26:34.253141 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253145 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253149 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253152 | orchestrator | 2025-05-28 19:26:34.253156 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-28 19:26:34.253160 | orchestrator | Wednesday 28 May 2025 19:24:17 +0000 (0:00:00.334) 0:11:25.795 ********* 2025-05-28 19:26:34.253164 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253168 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253171 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253175 | orchestrator | 2025-05-28 19:26:34.253179 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-28 19:26:34.253183 | orchestrator | Wednesday 28 May 2025 19:24:17 +0000 (0:00:00.337) 0:11:26.133 ********* 2025-05-28 19:26:34.253187 | orchestrator | skipping: 
[testbed-node-3] 2025-05-28 19:26:34.253190 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253194 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253198 | orchestrator | 2025-05-28 19:26:34.253202 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-28 19:26:34.253207 | orchestrator | Wednesday 28 May 2025 19:24:18 +0000 (0:00:00.662) 0:11:26.795 ********* 2025-05-28 19:26:34.253211 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-28 19:26:34.253215 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-28 19:26:34.253219 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253222 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-28 19:26:34.253226 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-28 19:26:34.253230 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253234 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-28 19:26:34.253238 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-28 19:26:34.253241 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253245 | orchestrator | 2025-05-28 19:26:34.253249 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-28 19:26:34.253255 | orchestrator | Wednesday 28 May 2025 19:24:18 +0000 (0:00:00.395) 0:11:27.191 ********* 2025-05-28 19:26:34.253259 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-28 19:26:34.253263 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-28 19:26:34.253266 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253270 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-28 19:26:34.253274 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-28 19:26:34.253278 | orchestrator | skipping: [testbed-node-4] 
2025-05-28 19:26:34.253282 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-28 19:26:34.253285 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-28 19:26:34.253289 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253293 | orchestrator | 2025-05-28 19:26:34.253297 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-28 19:26:34.253301 | orchestrator | Wednesday 28 May 2025 19:24:19 +0000 (0:00:00.428) 0:11:27.619 ********* 2025-05-28 19:26:34.253304 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253308 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253312 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253316 | orchestrator | 2025-05-28 19:26:34.253320 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-28 19:26:34.253323 | orchestrator | Wednesday 28 May 2025 19:24:19 +0000 (0:00:00.349) 0:11:27.968 ********* 2025-05-28 19:26:34.253327 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253331 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253335 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253339 | orchestrator | 2025-05-28 19:26:34.253342 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-28 19:26:34.253346 | orchestrator | Wednesday 28 May 2025 19:24:20 +0000 (0:00:00.639) 0:11:28.608 ********* 2025-05-28 19:26:34.253350 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253354 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253358 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253361 | orchestrator | 2025-05-28 19:26:34.253365 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 
2025-05-28 19:26:34.253369 | orchestrator | Wednesday 28 May 2025 19:24:20 +0000 (0:00:00.355) 0:11:28.964 ********* 2025-05-28 19:26:34.253373 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253377 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253380 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253384 | orchestrator | 2025-05-28 19:26:34.253388 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-28 19:26:34.253392 | orchestrator | Wednesday 28 May 2025 19:24:20 +0000 (0:00:00.367) 0:11:29.331 ********* 2025-05-28 19:26:34.253395 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253399 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253403 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253407 | orchestrator | 2025-05-28 19:26:34.253410 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-28 19:26:34.253414 | orchestrator | Wednesday 28 May 2025 19:24:21 +0000 (0:00:00.341) 0:11:29.672 ********* 2025-05-28 19:26:34.253418 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253422 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253426 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253429 | orchestrator | 2025-05-28 19:26:34.253433 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-28 19:26:34.253437 | orchestrator | Wednesday 28 May 2025 19:24:21 +0000 (0:00:00.657) 0:11:30.330 ********* 2025-05-28 19:26:34.253441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 19:26:34.253446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 19:26:34.253453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 19:26:34.253457 | orchestrator | skipping: [testbed-node-3] 2025-05-28 
19:26:34.253461 | orchestrator | 2025-05-28 19:26:34.253465 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-28 19:26:34.253468 | orchestrator | Wednesday 28 May 2025 19:24:22 +0000 (0:00:00.485) 0:11:30.816 ********* 2025-05-28 19:26:34.253472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 19:26:34.253476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 19:26:34.253480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 19:26:34.253484 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253487 | orchestrator | 2025-05-28 19:26:34.253491 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-28 19:26:34.253495 | orchestrator | Wednesday 28 May 2025 19:24:22 +0000 (0:00:00.471) 0:11:31.287 ********* 2025-05-28 19:26:34.253499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 19:26:34.253503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 19:26:34.253506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 19:26:34.253510 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253514 | orchestrator | 2025-05-28 19:26:34.253520 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-28 19:26:34.253524 | orchestrator | Wednesday 28 May 2025 19:24:23 +0000 (0:00:00.492) 0:11:31.779 ********* 2025-05-28 19:26:34.253527 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253531 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253535 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253539 | orchestrator | 2025-05-28 19:26:34.253543 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-28 19:26:34.253546 | orchestrator | 
Wednesday 28 May 2025 19:24:23 +0000 (0:00:00.378) 0:11:32.158 ********* 2025-05-28 19:26:34.253550 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-28 19:26:34.253554 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253558 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-28 19:26:34.253562 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253566 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-28 19:26:34.253569 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253573 | orchestrator | 2025-05-28 19:26:34.253577 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-28 19:26:34.253581 | orchestrator | Wednesday 28 May 2025 19:24:24 +0000 (0:00:00.472) 0:11:32.630 ********* 2025-05-28 19:26:34.253584 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253588 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253592 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253596 | orchestrator | 2025-05-28 19:26:34.253600 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-28 19:26:34.253603 | orchestrator | Wednesday 28 May 2025 19:24:24 +0000 (0:00:00.605) 0:11:33.236 ********* 2025-05-28 19:26:34.253607 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253611 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253615 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253619 | orchestrator | 2025-05-28 19:26:34.253622 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-28 19:26:34.253626 | orchestrator | Wednesday 28 May 2025 19:24:25 +0000 (0:00:00.352) 0:11:33.589 ********* 2025-05-28 19:26:34.253630 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-28 19:26:34.253634 | orchestrator | skipping: [testbed-node-3] 2025-05-28 
19:26:34.253637 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-28 19:26:34.253641 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253645 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-28 19:26:34.253649 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253656 | orchestrator | 2025-05-28 19:26:34.253660 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-28 19:26:34.253664 | orchestrator | Wednesday 28 May 2025 19:24:25 +0000 (0:00:00.453) 0:11:34.042 ********* 2025-05-28 19:26:34.253668 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-28 19:26:34.253671 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253675 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-28 19:26:34.253679 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253683 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-28 19:26:34.253687 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253691 | orchestrator | 2025-05-28 19:26:34.253694 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-28 19:26:34.253698 | orchestrator | Wednesday 28 May 2025 19:24:26 +0000 (0:00:00.362) 0:11:34.405 ********* 2025-05-28 19:26:34.253702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 19:26:34.253706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 19:26:34.253710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 19:26:34.253713 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253717 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-28 19:26:34.253721 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-28 19:26:34.253725 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-28 19:26:34.253728 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253732 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-28 19:26:34.253736 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-28 19:26:34.253742 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-28 19:26:34.253746 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253749 | orchestrator | 2025-05-28 19:26:34.253753 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-28 19:26:34.253757 | orchestrator | Wednesday 28 May 2025 19:24:27 +0000 (0:00:00.945) 0:11:35.351 ********* 2025-05-28 19:26:34.253761 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253765 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253769 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253772 | orchestrator | 2025-05-28 19:26:34.253776 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-28 19:26:34.253780 | orchestrator | Wednesday 28 May 2025 19:24:27 +0000 (0:00:00.531) 0:11:35.883 ********* 2025-05-28 19:26:34.253784 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 19:26:34.253788 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253791 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-28 19:26:34.253795 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253799 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-28 19:26:34.253803 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253806 | orchestrator | 
2025-05-28 19:26:34.253810 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-28 19:26:34.253814 | orchestrator | Wednesday 28 May 2025 19:24:28 +0000 (0:00:00.911) 0:11:36.794 ********* 2025-05-28 19:26:34.253818 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253823 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253827 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253831 | orchestrator | 2025-05-28 19:26:34.253835 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-28 19:26:34.253839 | orchestrator | Wednesday 28 May 2025 19:24:29 +0000 (0:00:00.564) 0:11:37.359 ********* 2025-05-28 19:26:34.253845 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253848 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253852 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253856 | orchestrator | 2025-05-28 19:26:34.253860 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-28 19:26:34.253864 | orchestrator | Wednesday 28 May 2025 19:24:29 +0000 (0:00:00.975) 0:11:38.335 ********* 2025-05-28 19:26:34.253868 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.253871 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.253875 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-28 19:26:34.253879 | orchestrator | 2025-05-28 19:26:34.253883 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-28 19:26:34.253887 | orchestrator | Wednesday 28 May 2025 19:24:30 +0000 (0:00:00.565) 0:11:38.900 ********* 2025-05-28 19:26:34.253891 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-28 19:26:34.253894 | orchestrator | 2025-05-28 19:26:34.253898 | orchestrator | TASK 
[ceph-facts : get current default crush rule name] ************************ 2025-05-28 19:26:34.253902 | orchestrator | Wednesday 28 May 2025 19:24:32 +0000 (0:00:01.790) 0:11:40.691 ********* 2025-05-28 19:26:34.253906 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-28 19:26:34.253911 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.253915 | orchestrator | 2025-05-28 19:26:34.253918 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-05-28 19:26:34.253922 | orchestrator | Wednesday 28 May 2025 19:24:33 +0000 (0:00:00.705) 0:11:41.397 ********* 2025-05-28 19:26:34.253927 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 19:26:34.253935 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 19:26:34.253939 | orchestrator | 2025-05-28 19:26:34.253942 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-28 19:26:34.253946 | orchestrator | Wednesday 28 May 2025 19:24:40 +0000 (0:00:07.196) 0:11:48.593 ********* 2025-05-28 19:26:34.253950 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-28 19:26:34.253954 | orchestrator | 2025-05-28 19:26:34.253958 | orchestrator | TASK [ceph-mds : include common.yml] 
******************************************* 2025-05-28 19:26:34.253961 | orchestrator | Wednesday 28 May 2025 19:24:43 +0000 (0:00:03.077) 0:11:51.670 ********* 2025-05-28 19:26:34.253965 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.253969 | orchestrator | 2025-05-28 19:26:34.253980 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-28 19:26:34.253984 | orchestrator | Wednesday 28 May 2025 19:24:43 +0000 (0:00:00.568) 0:11:52.239 ********* 2025-05-28 19:26:34.253988 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-28 19:26:34.253992 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-28 19:26:34.253995 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-28 19:26:34.253999 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-28 19:26:34.254005 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-28 19:26:34.254009 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-28 19:26:34.254026 | orchestrator | 2025-05-28 19:26:34.254031 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-28 19:26:34.254035 | orchestrator | Wednesday 28 May 2025 19:24:45 +0000 (0:00:01.423) 0:11:53.663 ********* 2025-05-28 19:26:34.254038 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:26:34.254042 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 19:26:34.254046 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-28 19:26:34.254050 | orchestrator | 2025-05-28 19:26:34.254054 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] 
*********************************** 2025-05-28 19:26:34.254058 | orchestrator | Wednesday 28 May 2025 19:24:47 +0000 (0:00:01.887) 0:11:55.550 ********* 2025-05-28 19:26:34.254061 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-28 19:26:34.254065 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 19:26:34.254069 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.254073 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 19:26:34.254077 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-28 19:26:34.254081 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.254084 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 19:26:34.254090 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-28 19:26:34.254094 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.254098 | orchestrator | 2025-05-28 19:26:34.254102 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-28 19:26:34.254106 | orchestrator | Wednesday 28 May 2025 19:24:48 +0000 (0:00:01.293) 0:11:56.844 ********* 2025-05-28 19:26:34.254110 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.254113 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.254117 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.254123 | orchestrator | 2025-05-28 19:26:34.254129 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-28 19:26:34.254136 | orchestrator | Wednesday 28 May 2025 19:24:49 +0000 (0:00:00.588) 0:11:57.432 ********* 2025-05-28 19:26:34.254143 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.254149 | orchestrator | 2025-05-28 19:26:34.254156 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-28 
19:26:34.254163 | orchestrator | Wednesday 28 May 2025 19:24:49 +0000 (0:00:00.549) 0:11:57.982 ********* 2025-05-28 19:26:34.254167 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.254170 | orchestrator | 2025-05-28 19:26:34.254174 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-28 19:26:34.254178 | orchestrator | Wednesday 28 May 2025 19:24:50 +0000 (0:00:00.850) 0:11:58.832 ********* 2025-05-28 19:26:34.254182 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.254186 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.254189 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.254193 | orchestrator | 2025-05-28 19:26:34.254197 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-28 19:26:34.254201 | orchestrator | Wednesday 28 May 2025 19:24:51 +0000 (0:00:01.280) 0:12:00.113 ********* 2025-05-28 19:26:34.254204 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.254208 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.254212 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.254216 | orchestrator | 2025-05-28 19:26:34.254219 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-28 19:26:34.254223 | orchestrator | Wednesday 28 May 2025 19:24:52 +0000 (0:00:01.215) 0:12:01.328 ********* 2025-05-28 19:26:34.254227 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.254230 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.254234 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.254241 | orchestrator | 2025-05-28 19:26:34.254245 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-28 19:26:34.254248 | orchestrator | Wednesday 28 May 2025 19:24:54 +0000 
(0:00:01.692) 0:12:03.021 ********* 2025-05-28 19:26:34.254252 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.254256 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.254260 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.254263 | orchestrator | 2025-05-28 19:26:34.254267 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-28 19:26:34.254271 | orchestrator | Wednesday 28 May 2025 19:24:56 +0000 (0:00:02.286) 0:12:05.307 ********* 2025-05-28 19:26:34.254275 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-28 19:26:34.254278 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-28 19:26:34.254282 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-05-28 19:26:34.254286 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.254290 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.254293 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.254297 | orchestrator | 2025-05-28 19:26:34.254301 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-28 19:26:34.254305 | orchestrator | Wednesday 28 May 2025 19:25:14 +0000 (0:00:17.067) 0:12:22.374 ********* 2025-05-28 19:26:34.254308 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.254312 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.254316 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.254320 | orchestrator | 2025-05-28 19:26:34.254323 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-28 19:26:34.254327 | orchestrator | Wednesday 28 May 2025 19:25:14 +0000 (0:00:00.664) 0:12:23.038 ********* 2025-05-28 19:26:34.254331 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml 
for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ********
Wednesday 28 May 2025 19:25:15 +0000 (0:00:00.753) 0:12:23.792 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
Wednesday 28 May 2025 19:25:15 +0000 (0:00:00.355) 0:12:24.148 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ********************
Wednesday 28 May 2025 19:25:16 +0000 (0:00:01.181) 0:12:25.330 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
Wednesday 28 May 2025 19:25:17 +0000 (0:00:00.872) 0:12:26.202 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
Wednesday 28 May 2025 19:25:18 +0000 (0:00:00.595) 0:12:26.797 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : include check_running_containers.yml] *********************
Wednesday 28 May 2025 19:25:20 +0000 (0:00:01.907) 0:12:28.705 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : check for a mon container] ********************************
Wednesday 28 May 2025 19:25:21 +0000 (0:00:00.697) 0:12:29.402 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for an osd container] *******************************
Wednesday 28 May 2025 19:25:21 +0000 (0:00:00.307) 0:12:29.710 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : check for a mds container] ********************************
Wednesday 28 May 2025 19:25:22 +0000 (0:00:00.681) 0:12:30.392 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : check for a rgw container] ********************************
Wednesday 28 May 2025 19:25:22 +0000 (0:00:00.920) 0:12:31.312 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : check for a mgr container] ********************************
Wednesday 28 May 2025 19:25:23 +0000 (0:00:00.668) 0:12:31.981 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a rbd mirror container] *************************
Wednesday 28 May 2025 19:25:23 +0000 (0:00:00.328) 0:12:32.309 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a nfs container] ********************************
Wednesday 28 May 2025 19:25:24 +0000 (0:00:00.308) 0:12:32.618 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a tcmu-runner container] ************************
Wednesday 28 May 2025 19:25:24 +0000 (0:00:00.553) 0:12:33.172 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a rbd-target-api container] *********************
Wednesday 28 May 2025 19:25:25 +0000 (0:00:00.332) 0:12:33.504 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a rbd-target-gw container] **********************
Wednesday 28 May 2025 19:25:25 +0000 (0:00:00.337) 0:12:33.842 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a ceph-crash container] *************************
Wednesday 28 May 2025 19:25:25 +0000 (0:00:00.305) 0:12:34.147 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : include check_socket_non_container.yml] *******************
Wednesday 28 May 2025 19:25:26 +0000 (0:00:00.973) 0:12:35.121 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_mon_status] ******************************
Wednesday 28 May 2025 19:25:27 +0000 (0:00:00.301) 0:12:35.422 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_osd_status] ******************************
Wednesday 28 May 2025 19:25:27 +0000 (0:00:00.319) 0:12:35.742 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : set_fact handler_mds_status] ******************************
Wednesday 28 May 2025 19:25:27 +0000 (0:00:00.317) 0:12:36.060 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : set_fact handler_rgw_status] ******************************
Wednesday 28 May 2025 19:25:28 +0000 (0:00:00.604) 0:12:36.665 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : set_fact handler_nfs_status] ******************************
Wednesday 28 May 2025 19:25:28 +0000 (0:00:00.328) 0:12:36.993 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_rbd_status] ******************************
Wednesday 28 May 2025 19:25:28 +0000 (0:00:00.315) 0:12:37.309 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_mgr_status] ******************************
Wednesday 28 May 2025 19:25:29 +0000 (0:00:00.313) 0:12:37.623 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_crash_status] ****************************
Wednesday 28 May 2025 19:25:29 +0000 (0:00:00.583) 0:12:38.206 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
Wednesday 28 May 2025 19:25:30 +0000 (0:00:00.375) 0:12:38.582 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
Wednesday 28 May 2025 19:25:30 +0000 (0:00:00.373) 0:12:38.955 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : reset num_osds] ********************************************
Wednesday 28 May 2025 19:25:30 +0000 (0:00:00.322) 0:12:39.277 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : count number of osds for lvm scenario] *********************
Wednesday 28 May 2025 19:25:31 +0000 (0:00:00.610) 0:12:39.888 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : look up for ceph-volume rejected devices] ******************
Wednesday 28 May 2025 19:25:31 +0000 (0:00:00.334) 0:12:40.223 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact rejected_devices] *********************************
Wednesday 28 May 2025 19:25:32 +0000 (0:00:00.314) 0:12:40.537 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _devices] *****************************************
Wednesday 28 May 2025 19:25:32 +0000 (0:00:00.306) 0:12:40.844 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Wednesday 28 May 2025 19:25:33 +0000 (0:00:00.563) 0:12:41.407 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Wednesday 28 May 2025 19:25:33 +0000 (0:00:00.376) 0:12:41.783 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Wednesday 28 May 2025 19:25:33 +0000 (0:00:00.337) 0:12:42.122 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
Wednesday 28 May 2025 19:25:34 +0000 (0:00:00.322) 0:12:42.444 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
Wednesday 28 May 2025 19:25:34 +0000 (0:00:00.574) 0:12:43.018 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
Wednesday 28 May 2025 19:25:35 +0000 (0:00:00.320) 0:12:43.339 *********
skipping: [testbed-node-3] => (item=)
skipping: [testbed-node-3] => (item=)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=)
skipping: [testbed-node-4] => (item=)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=)
skipping: [testbed-node-5] => (item=)
skipping: [testbed-node-5]

TASK [ceph-config : drop osd_memory_target from conf override] *****************
Wednesday 28 May 2025 19:25:35 +0000 (0:00:00.363) 0:12:43.702 *********
skipping: [testbed-node-3] => (item=osd memory target)
skipping: [testbed-node-3] => (item=osd_memory_target)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=osd memory target)
skipping: [testbed-node-4] => (item=osd_memory_target)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=osd memory target)
skipping: [testbed-node-5] => (item=osd_memory_target)
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _osd_memory_target] *******************************
Wednesday 28 May 2025 19:25:35 +0000 (0:00:00.370) 0:12:44.073 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : create ceph conf directory] ********************************
Wednesday 28 May 2025 19:25:36 +0000 (0:00:00.567) 0:12:44.640 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Wednesday 28 May 2025 19:25:36 +0000 (0:00:00.337) 0:12:44.977 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
Wednesday 28 May 2025 19:25:36 +0000 (0:00:00.331) 0:12:45.309 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
Wednesday 28 May 2025 19:25:37 +0000 (0:00:00.315) 0:12:45.625 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
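The skipped ceph-facts tasks above pick a `_radosgw_address` for each host from one of three sources: a `radosgw_address_block` CIDR, an explicit `radosgw_address`, or a `radosgw_interface`. As a rough illustration of the CIDR case only, the sketch below selects the first host address inside a configured block using Python's stdlib `ipaddress` module; the block `192.168.16.0/20` and the candidate addresses are hypothetical (chosen to match the testbed's `192.168.16.x` range seen later in this log), and this is not ceph-ansible's actual implementation, which uses the `ipaddr` Jinja filter.

```python
import ipaddress

def pick_radosgw_address(host_addresses, radosgw_address_block):
    """Return the first host IP that falls inside the configured CIDR,
    or None if no address matches (a sketch of the address_block branch)."""
    net = ipaddress.ip_network(radosgw_address_block)
    for addr in host_addresses:
        if ipaddress.ip_address(addr) in net:
            return addr
    return None

# Hypothetical per-host facts; only 192.168.16.13 lies inside the block.
print(pick_radosgw_address(["10.0.2.15", "192.168.16.13"], "192.168.16.0/20"))
```

In this run all three branches were skipped, which suggests the testbed inventory sets the RGW address another way (e.g. via precomputed `rgw_instances`, as the later `rgw_instances_host` items indicate).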
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-28 19:26:34.255463 | orchestrator | Wednesday 28 May 2025 19:25:37 +0000 (0:00:00.556) 0:12:46.181 ********* 2025-05-28 19:26:34.255467 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255471 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255475 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255478 | orchestrator | 2025-05-28 19:26:34.255482 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-28 19:26:34.255486 | orchestrator | Wednesday 28 May 2025 19:25:38 +0000 (0:00:00.346) 0:12:46.527 ********* 2025-05-28 19:26:34.255490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 19:26:34.255494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 19:26:34.255501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 19:26:34.255505 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255509 | orchestrator | 2025-05-28 19:26:34.255513 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-28 19:26:34.255517 | orchestrator | Wednesday 28 May 2025 19:25:38 +0000 (0:00:00.442) 0:12:46.970 ********* 2025-05-28 19:26:34.255521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 19:26:34.255525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 19:26:34.255528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 19:26:34.255532 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255536 | orchestrator | 2025-05-28 19:26:34.255540 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-28 19:26:34.255544 | orchestrator | Wednesday 28 May 2025 19:25:39 +0000 (0:00:00.440) 0:12:47.410 
********* 2025-05-28 19:26:34.255548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 19:26:34.255551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 19:26:34.255555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 19:26:34.255559 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255563 | orchestrator | 2025-05-28 19:26:34.255567 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-28 19:26:34.255571 | orchestrator | Wednesday 28 May 2025 19:25:39 +0000 (0:00:00.420) 0:12:47.831 ********* 2025-05-28 19:26:34.255575 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255578 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255582 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255586 | orchestrator | 2025-05-28 19:26:34.255590 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-28 19:26:34.255594 | orchestrator | Wednesday 28 May 2025 19:25:39 +0000 (0:00:00.397) 0:12:48.228 ********* 2025-05-28 19:26:34.255598 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-28 19:26:34.255601 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255605 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-28 19:26:34.255609 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255613 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-28 19:26:34.255617 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255620 | orchestrator | 2025-05-28 19:26:34.255624 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-28 19:26:34.255628 | orchestrator | Wednesday 28 May 2025 19:25:40 +0000 (0:00:00.803) 0:12:49.032 ********* 2025-05-28 19:26:34.255632 | orchestrator | skipping: [testbed-node-3] 2025-05-28 
19:26:34.255636 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255640 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255643 | orchestrator | 2025-05-28 19:26:34.255647 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-28 19:26:34.255651 | orchestrator | Wednesday 28 May 2025 19:25:41 +0000 (0:00:00.323) 0:12:49.356 ********* 2025-05-28 19:26:34.255655 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255659 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255663 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255666 | orchestrator | 2025-05-28 19:26:34.255670 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-28 19:26:34.255674 | orchestrator | Wednesday 28 May 2025 19:25:41 +0000 (0:00:00.321) 0:12:49.678 ********* 2025-05-28 19:26:34.255678 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-28 19:26:34.255682 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255685 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-28 19:26:34.255691 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255695 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-28 19:26:34.255699 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255705 | orchestrator | 2025-05-28 19:26:34.255709 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-28 19:26:34.255712 | orchestrator | Wednesday 28 May 2025 19:25:41 +0000 (0:00:00.455) 0:12:50.134 ********* 2025-05-28 19:26:34.255716 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-28 19:26:34.255720 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255724 | orchestrator | skipping: [testbed-node-4] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-28 19:26:34.255728 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255732 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-28 19:26:34.255736 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255739 | orchestrator | 2025-05-28 19:26:34.255743 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-28 19:26:34.255749 | orchestrator | Wednesday 28 May 2025 19:25:42 +0000 (0:00:00.614) 0:12:50.748 ********* 2025-05-28 19:26:34.255753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-28 19:26:34.255757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-28 19:26:34.255760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-28 19:26:34.255764 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-28 19:26:34.255768 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-28 19:26:34.255772 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-28 19:26:34.255779 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255783 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-28 19:26:34.255787 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-28 19:26:34.255791 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-28 19:26:34.255795 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255799 | orchestrator | 2025-05-28 19:26:34.255802 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-28 19:26:34.255806 | 
orchestrator | Wednesday 28 May 2025 19:25:43 +0000 (0:00:00.621) 0:12:51.369 ********* 2025-05-28 19:26:34.255810 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255814 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255818 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255822 | orchestrator | 2025-05-28 19:26:34.255825 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-28 19:26:34.255829 | orchestrator | Wednesday 28 May 2025 19:25:43 +0000 (0:00:00.763) 0:12:52.133 ********* 2025-05-28 19:26:34.255833 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 19:26:34.255837 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255841 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-28 19:26:34.255844 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255848 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-28 19:26:34.255852 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255856 | orchestrator | 2025-05-28 19:26:34.255860 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-28 19:26:34.255863 | orchestrator | Wednesday 28 May 2025 19:25:44 +0000 (0:00:00.541) 0:12:52.674 ********* 2025-05-28 19:26:34.255867 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255871 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.255875 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255879 | orchestrator | 2025-05-28 19:26:34.255883 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-28 19:26:34.255886 | orchestrator | Wednesday 28 May 2025 19:25:45 +0000 (0:00:00.755) 0:12:53.430 ********* 2025-05-28 19:26:34.255894 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.255898 | orchestrator | skipping: [testbed-node-4] 2025-05-28 
19:26:34.255901 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.255905 | orchestrator | 2025-05-28 19:26:34.255909 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-05-28 19:26:34.255913 | orchestrator | Wednesday 28 May 2025 19:25:45 +0000 (0:00:00.593) 0:12:54.023 ********* 2025-05-28 19:26:34.255917 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.255921 | orchestrator | 2025-05-28 19:26:34.255924 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-05-28 19:26:34.255928 | orchestrator | Wednesday 28 May 2025 19:25:46 +0000 (0:00:00.758) 0:12:54.781 ********* 2025-05-28 19:26:34.255932 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-05-28 19:26:34.255936 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-05-28 19:26:34.255940 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-05-28 19:26:34.255944 | orchestrator | 2025-05-28 19:26:34.255947 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-05-28 19:26:34.255951 | orchestrator | Wednesday 28 May 2025 19:25:47 +0000 (0:00:00.716) 0:12:55.498 ********* 2025-05-28 19:26:34.255955 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:26:34.255959 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 19:26:34.255963 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-28 19:26:34.255967 | orchestrator | 2025-05-28 19:26:34.255980 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-05-28 19:26:34.255984 | orchestrator | Wednesday 28 May 2025 19:25:48 +0000 (0:00:01.776) 0:12:57.274 ********* 2025-05-28 19:26:34.255990 | orchestrator | changed: [testbed-node-3] => 
(item=None) 2025-05-28 19:26:34.255994 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-28 19:26:34.255998 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.256002 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 19:26:34.256006 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-28 19:26:34.256009 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.256013 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 19:26:34.256017 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-28 19:26:34.256021 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.256024 | orchestrator | 2025-05-28 19:26:34.256028 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-05-28 19:26:34.256032 | orchestrator | Wednesday 28 May 2025 19:25:50 +0000 (0:00:01.217) 0:12:58.492 ********* 2025-05-28 19:26:34.256036 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.256040 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.256043 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.256047 | orchestrator | 2025-05-28 19:26:34.256051 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-05-28 19:26:34.256055 | orchestrator | Wednesday 28 May 2025 19:25:50 +0000 (0:00:00.609) 0:12:59.101 ********* 2025-05-28 19:26:34.256059 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.256062 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.256066 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.256070 | orchestrator | 2025-05-28 19:26:34.256076 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-05-28 19:26:34.256080 | orchestrator | Wednesday 28 May 2025 19:25:51 +0000 (0:00:00.330) 0:12:59.432 ********* 2025-05-28 19:26:34.256083 | orchestrator | included: 
/ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-28 19:26:34.256087 | orchestrator | 2025-05-28 19:26:34.256091 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-05-28 19:26:34.256095 | orchestrator | Wednesday 28 May 2025 19:25:51 +0000 (0:00:00.233) 0:12:59.666 ********* 2025-05-28 19:26:34.256102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256121 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.256125 | orchestrator | 2025-05-28 19:26:34.256129 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-05-28 19:26:34.256133 | orchestrator | Wednesday 28 May 2025 19:25:52 +0000 (0:00:00.908) 0:13:00.575 ********* 2025-05-28 19:26:34.256136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256144 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256155 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.256159 | orchestrator | 2025-05-28 19:26:34.256163 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-05-28 19:26:34.256167 | orchestrator | Wednesday 28 May 2025 19:25:53 +0000 (0:00:00.869) 0:13:01.445 ********* 2025-05-28 19:26:34.256171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-28 19:26:34.256190 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.256193 | orchestrator | 2025-05-28 19:26:34.256197 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-05-28 19:26:34.256201 | orchestrator | Wednesday 28 May 2025 19:25:53 +0000 (0:00:00.623) 0:13:02.068 ********* 
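The EC-profile tasks above are skipped because every logged pool item is `'type': 'replicated'`; only the "create replicated pools for rgw" task does real work. As a sketch (not the role's literal code), the logged pool items expand into ceph CLI calls along these lines, run against the first monitor; pool names, `pg_num` and `size` are taken verbatim from the log, the exact flags are an assumption:

```python
# Pool specs copied from the log items; the replicated pools the task creates.
pools = {
    "default.rgw.buckets.data":  {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.buckets.index": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.control":       {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.log":           {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.meta":          {"pg_num": 8, "size": 3, "type": "replicated"},
}

def pool_commands(name, spec):
    # Sketch of the equivalent CLI: create the pool with pg_num/pgp_num,
    # then set its replica count. Not the role's literal command line.
    return [
        f"ceph osd pool create {name} {spec['pg_num']} {spec['pg_num']} {spec['type']}",
        f"ceph osd pool set {name} size {spec['size']}",
    ]

for name, spec in pools.items():
    for cmd in pool_commands(name, spec):
        print(cmd)
```

The ~23 s this task takes in the TASKS RECAP is dominated by the five sequential pool creations on the delegated monitor node.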
2025-05-28 19:26:34.256207 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-28 19:26:34.256211 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-28 19:26:34.256215 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-28 19:26:34.256221 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-28 19:26:34.256225 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-28 19:26:34.256229 | orchestrator | 2025-05-28 19:26:34.256232 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-05-28 19:26:34.256236 | orchestrator | Wednesday 28 May 2025 19:26:17 +0000 (0:00:23.277) 0:13:25.345 ********* 2025-05-28 19:26:34.256242 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.256246 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.256250 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.256253 | orchestrator | 2025-05-28 19:26:34.256257 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-05-28 19:26:34.256261 | orchestrator | Wednesday 28 May 2025 19:26:17 +0000 (0:00:00.466) 0:13:25.812 ********* 2025-05-28 19:26:34.256265 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.256269 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.256272 | orchestrator | skipping: 
[testbed-node-5] 2025-05-28 19:26:34.256276 | orchestrator | 2025-05-28 19:26:34.256280 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-05-28 19:26:34.256284 | orchestrator | Wednesday 28 May 2025 19:26:17 +0000 (0:00:00.358) 0:13:26.170 ********* 2025-05-28 19:26:34.256288 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.256292 | orchestrator | 2025-05-28 19:26:34.256295 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-05-28 19:26:34.256299 | orchestrator | Wednesday 28 May 2025 19:26:18 +0000 (0:00:00.559) 0:13:26.729 ********* 2025-05-28 19:26:34.256303 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.256307 | orchestrator | 2025-05-28 19:26:34.256311 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-05-28 19:26:34.256315 | orchestrator | Wednesday 28 May 2025 19:26:19 +0000 (0:00:00.787) 0:13:27.517 ********* 2025-05-28 19:26:34.256318 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.256322 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.256326 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.256330 | orchestrator | 2025-05-28 19:26:34.256333 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-05-28 19:26:34.256337 | orchestrator | Wednesday 28 May 2025 19:26:20 +0000 (0:00:01.178) 0:13:28.695 ********* 2025-05-28 19:26:34.256341 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.256345 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.256349 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.256352 | orchestrator | 2025-05-28 19:26:34.256356 | orchestrator | TASK [ceph-rgw : enable 
ceph-radosgw.target] *********************************** 2025-05-28 19:26:34.256360 | orchestrator | Wednesday 28 May 2025 19:26:21 +0000 (0:00:01.168) 0:13:29.864 ********* 2025-05-28 19:26:34.256364 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.256368 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.256371 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.256375 | orchestrator | 2025-05-28 19:26:34.256379 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-05-28 19:26:34.256383 | orchestrator | Wednesday 28 May 2025 19:26:23 +0000 (0:00:02.054) 0:13:31.918 ********* 2025-05-28 19:26:34.256387 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-28 19:26:34.256391 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-28 19:26:34.256394 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-28 19:26:34.256400 | orchestrator | 2025-05-28 19:26:34.256404 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-05-28 19:26:34.256408 | orchestrator | Wednesday 28 May 2025 19:26:25 +0000 (0:00:01.990) 0:13:33.909 ********* 2025-05-28 19:26:34.256412 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:26:34.256416 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:26:34.256419 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:26:34.256423 | orchestrator | 2025-05-28 19:26:34.256427 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-28 19:26:34.256431 | orchestrator | Wednesday 28 May 2025 19:26:26 +0000 (0:00:01.187) 0:13:35.097 ********* 2025-05-28 
19:26:34.256435 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.256438 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.256442 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.256446 | orchestrator | 2025-05-28 19:26:34.256450 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-28 19:26:34.256454 | orchestrator | Wednesday 28 May 2025 19:26:27 +0000 (0:00:00.681) 0:13:35.779 ********* 2025-05-28 19:26:34.256459 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:26:34.256463 | orchestrator | 2025-05-28 19:26:34.256467 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-28 19:26:34.256471 | orchestrator | Wednesday 28 May 2025 19:26:28 +0000 (0:00:00.762) 0:13:36.541 ********* 2025-05-28 19:26:34.256475 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:26:34.256479 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:26:34.256482 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:26:34.256486 | orchestrator | 2025-05-28 19:26:34.256490 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-28 19:26:34.256494 | orchestrator | Wednesday 28 May 2025 19:26:28 +0000 (0:00:00.353) 0:13:36.894 ********* 2025-05-28 19:26:34.256498 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:26:34.256501 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:26:34.256505 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:26:34.256509 | orchestrator | 2025-05-28 19:26:34.256513 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-28 19:26:34.256517 | orchestrator | Wednesday 28 May 2025 19:26:29 +0000 (0:00:01.223) 0:13:38.118 ********* 2025-05-28 19:26:34.256520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
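The "systemd start rgw container" task above is parameterized per instance (`instance_name`, `radosgw_address`, `radosgw_frontend_port`). The template the role renders is not shown in the log; as an illustration only, those logged items map onto a radosgw beast frontend setting roughly like this:

```python
# Hypothetical rendering of the per-instance rgw frontend value; the
# "beast endpoint=IP:PORT" form is standard radosgw syntax, the values
# below are copied from the log items (one rgw0 instance per node).
def rgw_frontend(instance):
    return "beast endpoint={radosgw_address}:{radosgw_frontend_port}".format(**instance)

instances = [
    {"instance_name": "rgw0", "radosgw_address": "192.168.16.13", "radosgw_frontend_port": 8081},
    {"instance_name": "rgw0", "radosgw_address": "192.168.16.14", "radosgw_frontend_port": 8081},
    {"instance_name": "rgw0", "radosgw_address": "192.168.16.15", "radosgw_frontend_port": 8081},
]
for inst in instances:
    print(rgw_frontend(inst))
```

Each node therefore serves S3/Swift traffic on its own 192.168.16.x address at port 8081, behind the systemd unit and ceph-radosgw.target generated just above.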
 2025-05-28 19:26:34.256524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-05-28 19:26:34.256528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-05-28 19:26:34.256534 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:26:34.256538 | orchestrator |
2025-05-28 19:26:34.256542 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
2025-05-28 19:26:34.256545 | orchestrator | Wednesday 28 May 2025 19:26:30 +0000 (0:00:01.155) 0:13:39.274 *********
2025-05-28 19:26:34.256549 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:26:34.256553 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:26:34.256557 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:26:34.256561 | orchestrator |
2025-05-28 19:26:34.256565 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-28 19:26:34.256568 | orchestrator | Wednesday 28 May 2025 19:26:31 +0000 (0:00:00.379) 0:13:39.653 *********
2025-05-28 19:26:34.256572 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:26:34.256576 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:26:34.256580 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:26:34.256584 | orchestrator |
2025-05-28 19:26:34.256588 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:26:34.256591 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0
2025-05-28 19:26:34.256595 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
2025-05-28 19:26:34.256602 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0
2025-05-28 19:26:34.256606 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0
2025-05-28 19:26:34.256610 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0
2025-05-28 19:26:34.256614 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0
2025-05-28 19:26:34.256617 | orchestrator |
2025-05-28 19:26:34.256621 | orchestrator |
2025-05-28 19:26:34.256625 | orchestrator |
2025-05-28 19:26:34.256629 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:26:34.256633 | orchestrator | Wednesday 28 May 2025 19:26:32 +0000 (0:00:01.366) 0:13:41.020 *********
2025-05-28 19:26:34.256637 | orchestrator | ===============================================================================
2025-05-28 19:26:34.256640 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 46.69s
2025-05-28 19:26:34.256644 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 39.03s
2025-05-28 19:26:34.256648 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 23.28s
2025-05-28 19:26:34.256652 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.44s
2025-05-28 19:26:34.256656 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.07s
2025-05-28 19:26:34.256659 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.56s
2025-05-28 19:26:34.256663 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.64s
2025-05-28 19:26:34.256667 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.45s
2025-05-28 19:26:34.256671 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 8.01s
2025-05-28 19:26:34.256675 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 7.20s
2025-05-28 19:26:34.256678 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 7.02s
2025-05-28 19:26:34.256682 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.83s
2025-05-28 19:26:34.256686 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.02s
2025-05-28 19:26:34.256690 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.75s
2025-05-28 19:26:34.256694 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 4.57s
2025-05-28 19:26:34.256699 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.28s
2025-05-28 19:26:34.256703 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 4.24s
2025-05-28 19:26:34.256707 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 3.93s
2025-05-28 19:26:34.256711 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.43s
2025-05-28 19:26:34.256714 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 ---
3.41s 2025-05-28 19:26:34.256718 | orchestrator | 2025-05-28 19:26:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:34.256722 | orchestrator | 2025-05-28 19:26:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:37.305839 | orchestrator | 2025-05-28 19:26:37 | INFO  | Task d8a9a439-a466-4373-85bb-1242d6e4b43e is in state STARTED 2025-05-28 19:26:37.306855 | orchestrator | 2025-05-28 19:26:37 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:37.308630 | orchestrator | 2025-05-28 19:26:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:37.309057 | orchestrator | 2025-05-28 19:26:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:40.364052 | orchestrator | 2025-05-28 19:26:40 | INFO  | Task d8a9a439-a466-4373-85bb-1242d6e4b43e is in state STARTED 2025-05-28 19:26:40.365201 | orchestrator | 2025-05-28 19:26:40 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:40.366141 | orchestrator | 2025-05-28 19:26:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:40.367129 | orchestrator | 2025-05-28 19:26:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:43.416793 | orchestrator | 2025-05-28 19:26:43 | INFO  | Task d8a9a439-a466-4373-85bb-1242d6e4b43e is in state STARTED 2025-05-28 19:26:43.416892 | orchestrator | 2025-05-28 19:26:43 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:43.418985 | orchestrator | 2025-05-28 19:26:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:43.419013 | orchestrator | 2025-05-28 19:26:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:46.471129 | orchestrator | 2025-05-28 19:26:46 | INFO  | Task d8a9a439-a466-4373-85bb-1242d6e4b43e is in state STARTED 2025-05-28 19:26:46.473691 | orchestrator | 
2025-05-28 19:26:46 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:46.475181 | orchestrator | 2025-05-28 19:26:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:46.476081 | orchestrator | 2025-05-28 19:26:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:49.518309 | orchestrator | 2025-05-28 19:26:49 | INFO  | Task d8a9a439-a466-4373-85bb-1242d6e4b43e is in state STARTED 2025-05-28 19:26:49.518416 | orchestrator | 2025-05-28 19:26:49 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:49.519701 | orchestrator | 2025-05-28 19:26:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:49.519731 | orchestrator | 2025-05-28 19:26:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:52.558657 | orchestrator | 2025-05-28 19:26:52 | INFO  | Task d8a9a439-a466-4373-85bb-1242d6e4b43e is in state STARTED 2025-05-28 19:26:52.559501 | orchestrator | 2025-05-28 19:26:52 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:52.564913 | orchestrator | 2025-05-28 19:26:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:52.565001 | orchestrator | 2025-05-28 19:26:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:55.606556 | orchestrator | 2025-05-28 19:26:55 | INFO  | Task d8a9a439-a466-4373-85bb-1242d6e4b43e is in state STARTED 2025-05-28 19:26:55.606882 | orchestrator | 2025-05-28 19:26:55 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:55.612599 | orchestrator | 2025-05-28 19:26:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:55.612634 | orchestrator | 2025-05-28 19:26:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:26:58.650542 | orchestrator | 2025-05-28 19:26:58 | INFO  | Task 
d8a9a439-a466-4373-85bb-1242d6e4b43e is in state STARTED 2025-05-28 19:26:58.650676 | orchestrator | 2025-05-28 19:26:58 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state STARTED 2025-05-28 19:26:58.651657 | orchestrator | 2025-05-28 19:26:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:26:58.651720 | orchestrator | 2025-05-28 19:26:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:27:01.705814 | orchestrator | 2025-05-28 19:27:01 | INFO  | Task fa6da734-35b5-4b55-a868-5c5cee1dbdac is in state STARTED 2025-05-28 19:27:01.708065 | orchestrator | 2025-05-28 19:27:01 | INFO  | Task d8a9a439-a466-4373-85bb-1242d6e4b43e is in state STARTED 2025-05-28 19:27:01.714081 | orchestrator | 2025-05-28 19:27:01 | INFO  | Task cb3ad593-e631-4a31-91d9-bdae3f87d5e2 is in state STARTED 2025-05-28 19:27:01.716552 | orchestrator | 2025-05-28 19:27:01 | INFO  | Task c2ab959d-d634-488a-82bb-d095c32474eb is in state SUCCESS 2025-05-28 19:27:01.718667 | orchestrator | 2025-05-28 19:27:01.718705 | orchestrator | 2025-05-28 19:27:01.718718 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-28 19:27:01.718731 | orchestrator | 2025-05-28 19:27:01.718743 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-28 19:27:01.718756 | orchestrator | Wednesday 28 May 2025 19:23:25 +0000 (0:00:00.184) 0:00:00.184 ********* 2025-05-28 19:27:01.718767 | orchestrator | ok: [localhost] => { 2025-05-28 19:27:01.718796 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-05-28 19:27:01.718809 | orchestrator | } 2025-05-28 19:27:01.718822 | orchestrator | 2025-05-28 19:27:01.718834 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-28 19:27:01.718845 | orchestrator | Wednesday 28 May 2025 19:23:25 +0000 (0:00:00.050) 0:00:00.235 ********* 2025-05-28 19:27:01.718857 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-28 19:27:01.718869 | orchestrator | ...ignoring 2025-05-28 19:27:01.718881 | orchestrator | 2025-05-28 19:27:01.718892 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-28 19:27:01.718903 | orchestrator | Wednesday 28 May 2025 19:23:27 +0000 (0:00:02.583) 0:00:02.818 ********* 2025-05-28 19:27:01.718914 | orchestrator | skipping: [localhost] 2025-05-28 19:27:01.718959 | orchestrator | 2025-05-28 19:27:01.718971 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-28 19:27:01.718983 | orchestrator | Wednesday 28 May 2025 19:23:28 +0000 (0:00:00.092) 0:00:02.910 ********* 2025-05-28 19:27:01.718994 | orchestrator | ok: [localhost] 2025-05-28 19:27:01.719005 | orchestrator | 2025-05-28 19:27:01.719016 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:27:01.719027 | orchestrator | 2025-05-28 19:27:01.719038 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:27:01.719049 | orchestrator | Wednesday 28 May 2025 19:23:28 +0000 (0:00:00.245) 0:00:03.156 ********* 2025-05-28 19:27:01.719060 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:27:01.719071 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:27:01.719082 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:27:01.719093 | orchestrator | 2025-05-28 19:27:01.719104 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:27:01.719115 | orchestrator | Wednesday 28 May 2025 19:23:28 +0000 (0:00:00.468) 0:00:03.624 ********* 2025-05-28 19:27:01.719126 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-28 19:27:01.719138 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-28 19:27:01.719149 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-28 19:27:01.719160 | orchestrator | 2025-05-28 19:27:01.719171 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-28 19:27:01.719181 | orchestrator | 2025-05-28 19:27:01.719192 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-28 19:27:01.719203 | orchestrator | Wednesday 28 May 2025 19:23:29 +0000 (0:00:00.445) 0:00:04.069 ********* 2025-05-28 19:27:01.719214 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:27:01.719252 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-28 19:27:01.719266 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-28 19:27:01.719279 | orchestrator | 2025-05-28 19:27:01.719291 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-28 19:27:01.719304 | orchestrator | Wednesday 28 May 2025 19:23:29 +0000 (0:00:00.729) 0:00:04.799 ********* 2025-05-28 19:27:01.719316 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:27:01.719329 | orchestrator | 2025-05-28 19:27:01.719341 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-28 19:27:01.719354 | orchestrator | Wednesday 28 May 2025 19:23:30 +0000 (0:00:00.882) 0:00:05.681 ********* 2025-05-28 19:27:01.719394 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-28 19:27:01.719414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-28 19:27:01.719445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-28 19:27:01.719465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-28 19:27:01.719478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-28 19:27:01.719490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-28 19:27:01.719508 | orchestrator |
2025-05-28 19:27:01.719520 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2025-05-28 19:27:01.719531 | orchestrator | Wednesday 28 May 2025 19:23:35 +0000 (0:00:04.560) 0:00:10.242 *********
2025-05-28 19:27:01.719542 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.719555 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.719566 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.719577 | orchestrator |
2025-05-28 19:27:01.719588 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2025-05-28 19:27:01.719599 | orchestrator | Wednesday 28 May 2025 19:23:36 +0000 (0:00:00.868) 0:00:11.110 *********
2025-05-28 19:27:01.719610 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.719621 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.719632 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.719643 | orchestrator |
2025-05-28 19:27:01.719654 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2025-05-28 19:27:01.719665 | orchestrator | Wednesday 28 May 2025 19:23:37 +0000 (0:00:01.564) 0:00:12.675 *********
2025-05-28 19:27:01.719685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-28 19:27:01.719704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-28 19:27:01.719725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-28 19:27:01.719750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-28 19:27:01.719764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-28 19:27:01.719789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-28 19:27:01.719800 | orchestrator |
2025-05-28 19:27:01.719812 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-05-28 19:27:01.719823 | orchestrator | Wednesday 28 May 2025 19:23:43 +0000 (0:00:05.394) 0:00:18.070 *********
2025-05-28 19:27:01.719834 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.719845 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.719856 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.719867 | orchestrator |
2025-05-28 19:27:01.719878 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-05-28 19:27:01.719889 | orchestrator | Wednesday 28 May 2025 19:23:44 +0000 (0:00:01.155) 0:00:19.225 *********
2025-05-28 19:27:01.719900 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:27:01.719911 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.719938 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:27:01.719950 | orchestrator |
2025-05-28 19:27:01.719961 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2025-05-28 19:27:01.719972 | orchestrator | Wednesday 28 May 2025 19:23:52 +0000 (0:00:08.131) 0:00:27.357 *********
2025-05-28 19:27:01.719992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-28 19:27:01.720011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-28 19:27:01.720031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-28 19:27:01.720055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-28 19:27:01.720075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-28 19:27:01.720088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-28 19:27:01.720099 | orchestrator |
2025-05-28 19:27:01.720111 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-05-28 19:27:01.720122 | orchestrator | Wednesday 28 May 2025 19:23:57 +0000 (0:00:05.485) 0:00:32.842 *********
2025-05-28 19:27:01.720133 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.720144 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:27:01.720155 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:27:01.720166 | orchestrator |
2025-05-28 19:27:01.720177 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-05-28 19:27:01.720188 | orchestrator | Wednesday 28 May 2025 19:23:59 +0000 (0:00:01.379) 0:00:34.222 *********
2025-05-28 19:27:01.720199 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:27:01.720210 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:27:01.720221 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:27:01.720232 | orchestrator |
2025-05-28 19:27:01.720243 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-05-28 19:27:01.720253 | orchestrator | Wednesday 28 May 2025 19:23:59 +0000 (0:00:00.526) 0:00:34.748 *********
2025-05-28 19:27:01.720264 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:27:01.720275 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:27:01.720286 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:27:01.720297 | orchestrator |
2025-05-28 19:27:01.720308 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-05-28 19:27:01.720319 | orchestrator | Wednesday 28 May 2025 19:24:00 +0000 (0:00:00.450) 0:00:35.199 *********
2025-05-28 19:27:01.720331 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-05-28 19:27:01.720343 | orchestrator | ...ignoring
2025-05-28 19:27:01.720354 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-05-28 19:27:01.720365 | orchestrator | ...ignoring
2025-05-28 19:27:01.720376 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-05-28 19:27:01.720387 | orchestrator | ...ignoring
2025-05-28 19:27:01.720398 | orchestrator |
2025-05-28 19:27:01.720409 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-05-28 19:27:01.720420 | orchestrator | Wednesday 28 May 2025 19:24:11 +0000 (0:00:11.253) 0:00:46.452 *********
2025-05-28 19:27:01.720431 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:27:01.720442 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:27:01.720453 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:27:01.720464 | orchestrator |
2025-05-28 19:27:01.720481 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-05-28 19:27:01.720492 | orchestrator | Wednesday 28 May 2025 19:24:12 +0000 (0:00:00.678) 0:00:47.131 *********
2025-05-28 19:27:01.720503 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:27:01.720514 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.720525 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.720536 | orchestrator |
2025-05-28 19:27:01.720547 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-05-28 19:27:01.720558 | orchestrator | Wednesday 28 May 2025 19:24:12 +0000 (0:00:00.529) 0:00:47.661 *********
2025-05-28 19:27:01.720569 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:27:01.720580 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.720591 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.720602 | orchestrator |
2025-05-28 19:27:01.720618 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-05-28 19:27:01.720630 | orchestrator | Wednesday 28 May 2025 19:24:13 +0000 (0:00:00.611) 0:00:48.105 *********
2025-05-28 19:27:01.720642 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:27:01.720653 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.720664 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.720675 | orchestrator |
2025-05-28 19:27:01.720686 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-05-28 19:27:01.720701 | orchestrator | Wednesday 28 May 2025 19:24:13 +0000 (0:00:00.642) 0:00:48.717 *********
2025-05-28 19:27:01.720713 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:27:01.720724 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:27:01.720735 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:27:01.720746 | orchestrator |
2025-05-28 19:27:01.720757 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-05-28 19:27:01.720768 | orchestrator | Wednesday 28 May 2025 19:24:14 +0000 (0:00:00.596) 0:00:49.359 *********
2025-05-28 19:27:01.720778 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:27:01.720789 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.720800 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.720811 | orchestrator |
2025-05-28 19:27:01.720822 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-28 19:27:01.720833 | orchestrator | Wednesday 28 May 2025 19:24:15 +0000 (0:00:00.510) 0:00:49.956 *********
2025-05-28 19:27:01.720844 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.720855 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.720866 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-05-28 19:27:01.720877 | orchestrator |
2025-05-28 19:27:01.720888 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-05-28 19:27:01.720899 | orchestrator | Wednesday 28 May 2025 19:24:15 +0000 (0:00:00.510) 0:00:50.466 *********
2025-05-28 19:27:01.720910 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.720936 | orchestrator |
2025-05-28 19:27:01.720948 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-05-28 19:27:01.720959 | orchestrator | Wednesday 28 May 2025 19:24:26 +0000 (0:00:10.860) 0:01:01.326 *********
2025-05-28 19:27:01.720970 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:27:01.720981 | orchestrator |
2025-05-28 19:27:01.720992 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-28 19:27:01.721003 | orchestrator | Wednesday 28 May 2025 19:24:26 +0000 (0:00:00.129) 0:01:01.456 *********
2025-05-28 19:27:01.721014 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:27:01.721025 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.721036 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.721047 | orchestrator |
2025-05-28 19:27:01.721058 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-05-28 19:27:01.721069 | orchestrator | Wednesday 28 May 2025 19:24:27 +0000 (0:00:01.207) 0:01:02.664 *********
2025-05-28 19:27:01.721080 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.721098 | orchestrator |
2025-05-28 19:27:01.721109 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-05-28 19:27:01.721120 | orchestrator | Wednesday 28 May 2025 19:24:38 +0000 (0:00:10.723) 0:01:13.387 *********
2025-05-28 19:27:01.721131 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left).
2025-05-28 19:27:01.721142 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:27:01.721153 | orchestrator |
2025-05-28 19:27:01.721164 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-05-28 19:27:01.721175 | orchestrator | Wednesday 28 May 2025 19:24:45 +0000 (0:00:07.200) 0:01:20.587 *********
2025-05-28 19:27:01.721185 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:27:01.721196 | orchestrator |
2025-05-28 19:27:01.721207 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-05-28 19:27:01.721218 | orchestrator | Wednesday 28 May 2025 19:24:48 +0000 (0:00:02.735) 0:01:23.323 *********
2025-05-28 19:27:01.721229 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.721240 | orchestrator |
2025-05-28 19:27:01.721251 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-05-28 19:27:01.721262 | orchestrator | Wednesday 28 May 2025 19:24:48 +0000 (0:00:00.127) 0:01:23.450 *********
2025-05-28 19:27:01.721273 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:27:01.721284 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.721295 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.721306 | orchestrator |
2025-05-28 19:27:01.721317 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-05-28 19:27:01.721328 | orchestrator | Wednesday 28 May 2025 19:24:49 +0000 (0:00:00.459) 0:01:23.910 *********
2025-05-28 19:27:01.721339 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:27:01.721350 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:27:01.721360 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:27:01.721371 | orchestrator |
2025-05-28 19:27:01.721382 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] *************
2025-05-28 19:27:01.721393 | orchestrator | Wednesday 28 May 2025 19:24:49 +0000 (0:00:00.460) 0:01:24.371 *********
2025-05-28 19:27:01.721404 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-05-28 19:27:01.721415 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.721426 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:27:01.721437 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:27:01.721447 | orchestrator |
2025-05-28 19:27:01.721458 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-05-28 19:27:01.721469 | orchestrator | skipping: no hosts matched
2025-05-28 19:27:01.721480 | orchestrator |
2025-05-28 19:27:01.721491 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-05-28 19:27:01.721502 | orchestrator |
2025-05-28 19:27:01.721513 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-05-28 19:27:01.721524 | orchestrator | Wednesday 28 May 2025 19:25:09 +0000 (0:00:19.688) 0:01:44.059 *********
2025-05-28 19:27:01.721535 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:27:01.721545 | orchestrator |
2025-05-28 19:27:01.721562 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-05-28 19:27:01.721574 | orchestrator | Wednesday 28 May 2025 19:25:24 +0000 (0:00:15.341) 0:01:59.400 *********
2025-05-28 19:27:01.721585 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:27:01.721596 | orchestrator |
2025-05-28 19:27:01.721608 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-05-28 19:27:01.721619 | orchestrator | Wednesday 28 May 2025 19:25:45 +0000 (0:00:20.538) 0:02:19.939 *********
2025-05-28 19:27:01.721630 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:27:01.721641 | orchestrator |
2025-05-28 19:27:01.721657 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-05-28 19:27:01.721669 | orchestrator |
2025-05-28 19:27:01.721680 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-05-28 19:27:01.721699 | orchestrator | Wednesday 28 May 2025 19:25:47 +0000 (0:00:02.405) 0:02:22.345 *********
2025-05-28 19:27:01.721710 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:27:01.721721 | orchestrator |
2025-05-28 19:27:01.721732 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-05-28 19:27:01.721743 | orchestrator | Wednesday 28 May 2025 19:26:02 +0000 (0:00:15.486) 0:02:37.832 *********
2025-05-28 19:27:01.721754 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:27:01.721765 | orchestrator |
2025-05-28 19:27:01.721776 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-05-28 19:27:01.721787 | orchestrator | Wednesday 28 May 2025 19:26:23 +0000 (0:00:20.547) 0:02:58.379 *********
2025-05-28 19:27:01.721798 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:27:01.721810 | orchestrator |
2025-05-28 19:27:01.721821 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-05-28 19:27:01.721831 | orchestrator |
2025-05-28 19:27:01.721843 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-05-28 19:27:01.721854 | orchestrator | Wednesday 28 May 2025 19:26:25 +0000 (0:00:02.472) 0:03:00.852 *********
2025-05-28 19:27:01.721865 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.721876 | orchestrator |
2025-05-28 19:27:01.721887 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-05-28 19:27:01.721898 | orchestrator | Wednesday 28 May 2025 19:26:38 +0000 (0:00:12.900) 0:03:13.752 *********
2025-05-28 19:27:01.721909 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:27:01.721920 | orchestrator |
2025-05-28 19:27:01.721958 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-05-28 19:27:01.721970 | orchestrator | Wednesday 28 May 2025 19:26:43 +0000 (0:00:04.531) 0:03:18.284 *********
2025-05-28 19:27:01.721981 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:27:01.721992 | orchestrator |
2025-05-28 19:27:01.722003 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-05-28 19:27:01.722014 | orchestrator |
2025-05-28 19:27:01.722072 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-05-28 19:27:01.722083 | orchestrator | Wednesday 28 May 2025 19:26:45 +0000 (0:00:02.482) 0:03:20.766 *********
2025-05-28 19:27:01.722094 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:27:01.722106 | orchestrator |
2025-05-28 19:27:01.722117 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-05-28 19:27:01.722128 | orchestrator | Wednesday 28 May 2025 19:26:46 +0000 (0:00:00.722) 0:03:21.489 *********
2025-05-28 19:27:01.722139 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.722150 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.722161 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.722172 | orchestrator |
2025-05-28 19:27:01.722184 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-05-28 19:27:01.722195 | orchestrator | Wednesday 28 May 2025 19:26:49 +0000 (0:00:02.519) 0:03:24.008 *********
2025-05-28 19:27:01.722206 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.722217 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.722228 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.722239 | orchestrator |
2025-05-28 19:27:01.722250 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-05-28 19:27:01.722261 | orchestrator | Wednesday 28 May 2025 19:26:51 +0000 (0:00:02.110) 0:03:26.119 *********
2025-05-28 19:27:01.722272 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.722284 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.722294 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.722306 | orchestrator |
2025-05-28 19:27:01.722317 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-05-28 19:27:01.722328 | orchestrator | Wednesday 28 May 2025 19:26:53 +0000 (0:00:02.236) 0:03:28.355 *********
2025-05-28 19:27:01.722339 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.722350 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.722369 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:27:01.722380 | orchestrator |
2025-05-28 19:27:01.722391 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-05-28 19:27:01.722402 | orchestrator | Wednesday 28 May 2025 19:26:55 +0000 (0:00:02.084) 0:03:30.440 *********
2025-05-28 19:27:01.722414 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:27:01.722425 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:27:01.722436 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:27:01.722447 | orchestrator |
2025-05-28 19:27:01.722458 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-05-28 19:27:01.722469 | orchestrator | Wednesday 28 May 2025 19:26:58 +0000 (0:00:03.395) 0:03:33.836 *********
2025-05-28 19:27:01.722480 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:27:01.722491 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:27:01.722502 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:27:01.722513 | orchestrator |
2025-05-28 19:27:01.722524 | orchestrator | PLAY RECAP
********************************************************************* 2025-05-28 19:27:01.722536 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-28 19:27:01.722547 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-28 19:27:01.722567 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-28 19:27:01.722579 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-28 19:27:01.722591 | orchestrator | 2025-05-28 19:27:01.722602 | orchestrator | 2025-05-28 19:27:01.722613 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:27:01.722630 | orchestrator | Wednesday 28 May 2025 19:26:59 +0000 (0:00:00.367) 0:03:34.204 ********* 2025-05-28 19:27:01.722641 | orchestrator | =============================================================================== 2025-05-28 19:27:01.722652 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.09s 2025-05-28 19:27:01.722663 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 30.83s 2025-05-28 19:27:01.722674 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 19.69s 2025-05-28 19:27:01.722685 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.90s 2025-05-28 19:27:01.722696 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.25s 2025-05-28 19:27:01.722707 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.86s 2025-05-28 19:27:01.722718 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.72s 2025-05-28 19:27:01.722729 | orchestrator | mariadb : Copying over 
galera.cnf --------------------------------------- 8.13s 2025-05-28 19:27:01.722740 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.20s 2025-05-28 19:27:01.722751 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 5.49s 2025-05-28 19:27:01.722762 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.39s 2025-05-28 19:27:01.722773 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.88s 2025-05-28 19:27:01.722784 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.56s 2025-05-28 19:27:01.722795 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.53s 2025-05-28 19:27:01.722806 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.40s 2025-05-28 19:27:01.722817 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.74s 2025-05-28 19:27:01.722828 | orchestrator | Check MariaDB service --------------------------------------------------- 2.58s 2025-05-28 19:27:01.722845 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.52s 2025-05-28 19:27:01.722856 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.48s 2025-05-28 19:27:01.722867 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.24s 2025-05-28 19:27:01.722878 | orchestrator | 2025-05-28 19:27:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:27:01.722890 | orchestrator | 2025-05-28 19:27:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:27:04.753247 | orchestrator | 2025-05-28 19:27:04 | INFO  | Task fa6da734-35b5-4b55-a868-5c5cee1dbdac is in state STARTED 2025-05-28 19:27:04.753509 | orchestrator | 2025-05-28 19:27:04 | INFO 
 | Task d8a9a439-a466-4373-85bb-1242d6e4b43e is in state STARTED 2025-05-28 19:27:04.756726 | orchestrator | 2025-05-28 19:27:04 | INFO  | Task cb3ad593-e631-4a31-91d9-bdae3f87d5e2 is in state STARTED 2025-05-28 19:27:04.759318 | orchestrator | 2025-05-28 19:27:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:27:04.759342 | orchestrator | 2025-05-28 19:27:04 | INFO  | Wait 1 second(s) until the next check
[... identical "is in state STARTED" polling cycles for tasks fa6da734, d8a9a439, cb3ad593, and 32fe2e71 repeated every ~3 s from 19:27:07 through 19:28:42; repeats trimmed ...]
2025-05-28 19:28:45.475439 | orchestrator | 2025-05-28 19:28:45 | INFO  | Task fa6da734-35b5-4b55-a868-5c5cee1dbdac is in state STARTED 2025-05-28 19:28:45.477493 | orchestrator | 2025-05-28 19:28:45 | INFO  | Task
d8a9a439-a466-4373-85bb-1242d6e4b43e is in state SUCCESS 2025-05-28 19:28:45.479111 | orchestrator | 2025-05-28 19:28:45.479158 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-28 19:28:45.479171 | orchestrator | 2025-05-28 19:28:45.479183 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-28 19:28:45.479194 | orchestrator | 2025-05-28 19:28:45.479205 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-28 19:28:45.479217 | orchestrator | Wednesday 28 May 2025 19:26:37 +0000 (0:00:01.084) 0:00:01.084 ********* 2025-05-28 19:28:45.479228 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:28:45.479241 | orchestrator | 2025-05-28 19:28:45.479253 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-28 19:28:45.479264 | orchestrator | Wednesday 28 May 2025 19:26:38 +0000 (0:00:00.523) 0:00:01.607 ********* 2025-05-28 19:28:45.479275 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-28 19:28:45.479287 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-28 19:28:45.479297 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-28 19:28:45.479309 | orchestrator | 2025-05-28 19:28:45.479320 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-28 19:28:45.479413 | orchestrator | Wednesday 28 May 2025 19:26:38 +0000 (0:00:00.801) 0:00:02.408 ********* 2025-05-28 19:28:45.479434 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:28:45.479446 | orchestrator | 2025-05-28 19:28:45.479456 | orchestrator | TASK [ceph-facts : check if it is atomic host] 
********************************* 2025-05-28 19:28:45.479468 | orchestrator | Wednesday 28 May 2025 19:26:39 +0000 (0:00:00.690) 0:00:03.098 ********* 2025-05-28 19:28:45.479479 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:28:45.479573 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:28:45.479588 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:28:45.479599 | orchestrator | 2025-05-28 19:28:45.479610 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-28 19:28:45.479621 | orchestrator | Wednesday 28 May 2025 19:26:40 +0000 (0:00:00.607) 0:00:03.706 ********* 2025-05-28 19:28:45.479632 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:28:45.479643 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:28:45.479654 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:28:45.479665 | orchestrator | 2025-05-28 19:28:45.479676 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-28 19:28:45.479687 | orchestrator | Wednesday 28 May 2025 19:26:40 +0000 (0:00:00.291) 0:00:03.997 ********* 2025-05-28 19:28:45.479698 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:28:45.479710 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:28:45.479722 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:28:45.479734 | orchestrator | 2025-05-28 19:28:45.479747 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-28 19:28:45.479759 | orchestrator | Wednesday 28 May 2025 19:26:41 +0000 (0:00:00.759) 0:00:04.757 ********* 2025-05-28 19:28:45.479792 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:28:45.479804 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:28:45.479816 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:28:45.479829 | orchestrator | 2025-05-28 19:28:45.479842 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-28 19:28:45.479855 | orchestrator 
| Wednesday 28 May 2025 19:26:41 +0000 (0:00:00.308) 0:00:05.065 ********* 2025-05-28 19:28:45.479867 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:28:45.479879 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:28:45.479892 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:28:45.479904 | orchestrator | 2025-05-28 19:28:45.479917 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-28 19:28:45.479942 | orchestrator | Wednesday 28 May 2025 19:26:41 +0000 (0:00:00.296) 0:00:05.362 ********* 2025-05-28 19:28:45.479954 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:28:45.479965 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:28:45.479975 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:28:45.479986 | orchestrator | 2025-05-28 19:28:45.479997 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-28 19:28:45.480009 | orchestrator | Wednesday 28 May 2025 19:26:42 +0000 (0:00:00.307) 0:00:05.669 ********* 2025-05-28 19:28:45.480020 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.480032 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.480043 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.480054 | orchestrator | 2025-05-28 19:28:45.480065 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-28 19:28:45.480076 | orchestrator | Wednesday 28 May 2025 19:26:42 +0000 (0:00:00.466) 0:00:06.136 ********* 2025-05-28 19:28:45.480087 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:28:45.480098 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:28:45.480109 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:28:45.480120 | orchestrator | 2025-05-28 19:28:45.480131 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-28 19:28:45.480143 | orchestrator | Wednesday 28 May 2025 19:26:42 +0000 (0:00:00.279) 
0:00:06.415 ********* 2025-05-28 19:28:45.480159 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-28 19:28:45.480170 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 19:28:45.480181 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 19:28:45.480192 | orchestrator | 2025-05-28 19:28:45.480203 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-28 19:28:45.480214 | orchestrator | Wednesday 28 May 2025 19:26:43 +0000 (0:00:00.666) 0:00:07.082 ********* 2025-05-28 19:28:45.480225 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:28:45.480236 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:28:45.480247 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:28:45.480258 | orchestrator | 2025-05-28 19:28:45.480269 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-28 19:28:45.480280 | orchestrator | Wednesday 28 May 2025 19:26:44 +0000 (0:00:00.432) 0:00:07.514 ********* 2025-05-28 19:28:45.480306 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-28 19:28:45.480318 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 19:28:45.480329 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 19:28:45.480340 | orchestrator | 2025-05-28 19:28:45.480352 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-28 19:28:45.480363 | orchestrator | Wednesday 28 May 2025 19:26:46 +0000 (0:00:02.267) 0:00:09.781 ********* 2025-05-28 19:28:45.480374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-28 19:28:45.480385 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-1)  2025-05-28 19:28:45.480396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-28 19:28:45.480407 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.480419 | orchestrator | 2025-05-28 19:28:45.480430 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-28 19:28:45.480441 | orchestrator | Wednesday 28 May 2025 19:26:46 +0000 (0:00:00.409) 0:00:10.191 ********* 2025-05-28 19:28:45.480453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-28 19:28:45.480467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-28 19:28:45.480486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-28 19:28:45.480498 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.480509 | orchestrator | 2025-05-28 19:28:45.480520 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-28 19:28:45.480531 | orchestrator | Wednesday 28 May 2025 19:26:47 +0000 (0:00:00.634) 0:00:10.826 ********* 2025-05-28 19:28:45.480544 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 19:28:45.480557 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 19:28:45.480569 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 19:28:45.480580 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.480592 | orchestrator | 2025-05-28 19:28:45.480603 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-28 19:28:45.480614 | orchestrator | Wednesday 28 May 2025 19:26:47 +0000 (0:00:00.168) 0:00:10.995 ********* 2025-05-28 19:28:45.480632 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '276687da0d8a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-28 19:26:44.860825', 'end': '2025-05-28 19:26:44.915847', 'delta': '0:00:00.055022', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': ['276687da0d8a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-28 19:28:45.480655 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'c477ebe64635', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-28 19:26:45.407741', 'end': '2025-05-28 19:26:45.454002', 'delta': '0:00:00.046261', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c477ebe64635'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-28 19:28:45.480668 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '60398fb80f6e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-28 19:26:45.967239', 'end': '2025-05-28 19:26:46.013605', 'delta': '0:00:00.046366', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['60398fb80f6e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-28 19:28:45.480686 | orchestrator | 2025-05-28 19:28:45.480698 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-28 19:28:45.480709 | orchestrator | Wednesday 28 May 2025 19:26:47 +0000 (0:00:00.198) 0:00:11.193 ********* 2025-05-28 19:28:45.480720 | 
orchestrator | ok: [testbed-node-3] 2025-05-28 19:28:45.480731 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:28:45.480742 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:28:45.480753 | orchestrator | 2025-05-28 19:28:45.480779 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-28 19:28:45.480792 | orchestrator | Wednesday 28 May 2025 19:26:48 +0000 (0:00:00.423) 0:00:11.617 ********* 2025-05-28 19:28:45.480803 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-28 19:28:45.480814 | orchestrator | 2025-05-28 19:28:45.480825 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-28 19:28:45.480836 | orchestrator | Wednesday 28 May 2025 19:26:49 +0000 (0:00:01.357) 0:00:12.974 ********* 2025-05-28 19:28:45.480847 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.480858 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.480870 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.480881 | orchestrator | 2025-05-28 19:28:45.480892 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-28 19:28:45.480903 | orchestrator | Wednesday 28 May 2025 19:26:49 +0000 (0:00:00.440) 0:00:13.415 ********* 2025-05-28 19:28:45.480914 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.480925 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.480936 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.480947 | orchestrator | 2025-05-28 19:28:45.480958 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-28 19:28:45.480969 | orchestrator | Wednesday 28 May 2025 19:26:50 +0000 (0:00:00.411) 0:00:13.827 ********* 2025-05-28 19:28:45.480980 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.480991 | orchestrator | skipping: [testbed-node-4] 2025-05-28 
19:28:45.481002 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.481013 | orchestrator | 2025-05-28 19:28:45.481025 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-28 19:28:45.481035 | orchestrator | Wednesday 28 May 2025 19:26:50 +0000 (0:00:00.294) 0:00:14.121 ********* 2025-05-28 19:28:45.481047 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:28:45.481058 | orchestrator | 2025-05-28 19:28:45.481069 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-28 19:28:45.481080 | orchestrator | Wednesday 28 May 2025 19:26:50 +0000 (0:00:00.117) 0:00:14.238 ********* 2025-05-28 19:28:45.481091 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.481102 | orchestrator | 2025-05-28 19:28:45.481113 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-28 19:28:45.481125 | orchestrator | Wednesday 28 May 2025 19:26:50 +0000 (0:00:00.205) 0:00:14.444 ********* 2025-05-28 19:28:45.481136 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.481147 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.481158 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.481169 | orchestrator | 2025-05-28 19:28:45.481180 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-28 19:28:45.481195 | orchestrator | Wednesday 28 May 2025 19:26:51 +0000 (0:00:00.471) 0:00:14.916 ********* 2025-05-28 19:28:45.481207 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.481225 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.481236 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.481247 | orchestrator | 2025-05-28 19:28:45.481258 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-28 19:28:45.481270 | orchestrator | Wednesday 28 May 
2025 19:26:51 +0000 (0:00:00.313) 0:00:15.229 ********* 2025-05-28 19:28:45.481281 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.481292 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.481303 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.481314 | orchestrator | 2025-05-28 19:28:45.481325 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-28 19:28:45.481336 | orchestrator | Wednesday 28 May 2025 19:26:52 +0000 (0:00:00.310) 0:00:15.540 ********* 2025-05-28 19:28:45.481347 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.481359 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.481375 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.481387 | orchestrator | 2025-05-28 19:28:45.481399 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-28 19:28:45.481410 | orchestrator | Wednesday 28 May 2025 19:26:52 +0000 (0:00:00.316) 0:00:15.856 ********* 2025-05-28 19:28:45.481421 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.481432 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.481443 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.481455 | orchestrator | 2025-05-28 19:28:45.481466 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-28 19:28:45.481477 | orchestrator | Wednesday 28 May 2025 19:26:52 +0000 (0:00:00.477) 0:00:16.334 ********* 2025-05-28 19:28:45.481488 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.481499 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.481511 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.481522 | orchestrator | 2025-05-28 19:28:45.481533 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-28 19:28:45.481544 | orchestrator | Wednesday 28 May 
2025 19:26:53 +0000 (0:00:00.329) 0:00:16.664 ********* 2025-05-28 19:28:45.481555 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.481567 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.481578 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.481589 | orchestrator | 2025-05-28 19:28:45.481600 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-28 19:28:45.481611 | orchestrator | Wednesday 28 May 2025 19:26:53 +0000 (0:00:00.319) 0:00:16.984 ********* 2025-05-28 19:28:45.481623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79c077cd--dd98--5cad--a8fa--86d8aa897eb3-osd--block--79c077cd--dd98--5cad--a8fa--86d8aa897eb3', 'dm-uuid-LVM-UBrlOBB861jVBN2oFE7flYtnx14OEDwnXwBJxe52At9drgNHuOzs8cxgMCMljOpr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--117a45ef--4e6c--5b76--bea4--f0c196d92690-osd--block--117a45ef--4e6c--5b76--bea4--f0c196d92690', 'dm-uuid-LVM-PnniarbhwZR82CJRkBx0Ja60r5xicpTcoxdkJGMjhdGMdoe25FLGJsA3G7WeFE7b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3ed7399e--dc97--5c28--9f68--879666a39403-osd--block--3ed7399e--dc97--5c28--9f68--879666a39403', 'dm-uuid-LVM-5Ln76UVwsZb24Ce9e2cHJzEQ4hrh0bAq3kxwIGAllXNeqToFd7SL1rej2NELmu9n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0344b063--3cec--5ade--bfbf--9241287811af-osd--block--0344b063--3cec--5ade--bfbf--9241287811af', 'dm-uuid-LVM-6TANzYwr6bZprSERlwam74GDjlqkdRykRYD1B1Fjvyn41kwk7th3THqF0s7lywXA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e', 'scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part1', 'scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part14', 'scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part15', 'scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part16', 'scsi-SQEMU_QEMU_HARDDISK_3f33ab16-b639-440d-ac1b-a4a99753b81e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.481903 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--79c077cd--dd98--5cad--a8fa--86d8aa897eb3-osd--block--79c077cd--dd98--5cad--a8fa--86d8aa897eb3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yQ0K7d-FFRQ-fZ3L-gEwh-Nuf2-uOxH-WZESvb', 'scsi-0QEMU_QEMU_HARDDISK_49a2ee15-28bf-4b5f-b85e-3182eb91d801', 'scsi-SQEMU_QEMU_HARDDISK_49a2ee15-28bf-4b5f-b85e-3182eb91d801'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.481939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--117a45ef--4e6c--5b76--bea4--f0c196d92690-osd--block--117a45ef--4e6c--5b76--bea4--f0c196d92690'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3cLG1x-g7xV-colf-9NHq-anwz-LgS6-fAz0xA', 'scsi-0QEMU_QEMU_HARDDISK_1334c062-0c98-48ca-b2e9-c7f7d80524d4', 'scsi-SQEMU_QEMU_HARDDISK_1334c062-0c98-48ca-b2e9-c7f7d80524d4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.481963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.481975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_384074e9-09a1-4592-86bd-93fc7dbc72b1', 'scsi-SQEMU_QEMU_HARDDISK_384074e9-09a1-4592-86bd-93fc7dbc72b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.481998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-18-27-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3', 'scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fcdec5e-7c62-42ab-b54c-67d461c9b6b3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482090 | 
orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.482102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3ed7399e--dc97--5c28--9f68--879666a39403-osd--block--3ed7399e--dc97--5c28--9f68--879666a39403'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aRJ4k3-5LQf-h667-kmFR-wyn4-FzEt-NeG8j5', 'scsi-0QEMU_QEMU_HARDDISK_0c0aa11d-14fc-40a7-bbcb-a7c7d902b836', 'scsi-SQEMU_QEMU_HARDDISK_0c0aa11d-14fc-40a7-bbcb-a7c7d902b836'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0344b063--3cec--5ade--bfbf--9241287811af-osd--block--0344b063--3cec--5ade--bfbf--9241287811af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hRFZlh-qctn-bnB0-9ZhO-JNBh-muQM-00Vczq', 'scsi-0QEMU_QEMU_HARDDISK_6fe61b53-6367-46c0-9f1e-24f42cf64445', 'scsi-SQEMU_QEMU_HARDDISK_6fe61b53-6367-46c0-9f1e-24f42cf64445'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3485bbb9-dc34-4923-9640-15ed9830c3cd', 'scsi-SQEMU_QEMU_HARDDISK_3485bbb9-dc34-4923-9640-15ed9830c3cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-18-27-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5db078c0--6128--52c2--9305--54ff671eda75-osd--block--5db078c0--6128--52c2--9305--54ff671eda75', 'dm-uuid-LVM-RYQZnbBGY0TjyDJuc1CjDrS2jsaKjqQ2ZbxT5CuUvGXG4GryREvNmdF1Q0N8AJTE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.482610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fda1a2ce--c0e6--5c69--aaa5--109883ddc076-osd--block--fda1a2ce--c0e6--5c69--aaa5--109883ddc076', 
'dm-uuid-LVM-0TgdcD8Enf5FOTXaY0BoayYOBZYs3eXfEyLrrYuvDOobtP3Ih6O52eEwd9C6PIMF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.482627 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.482642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.482656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.482687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.482700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.482712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.482723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.482746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.482817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:28:45.482836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87', 'scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e8a159a-a40f-415e-ab81-88c2679d1e87-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5db078c0--6128--52c2--9305--54ff671eda75-osd--block--5db078c0--6128--52c2--9305--54ff671eda75'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1wqmIn-xPVd-MNFA-i9g8-7vPd-fqn2-5J6j0X', 'scsi-0QEMU_QEMU_HARDDISK_1e78336b-5c45-4f72-b22f-cac6621703c1', 'scsi-SQEMU_QEMU_HARDDISK_1e78336b-5c45-4f72-b22f-cac6621703c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fda1a2ce--c0e6--5c69--aaa5--109883ddc076-osd--block--fda1a2ce--c0e6--5c69--aaa5--109883ddc076'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bmLQ6T-MA5V-AVU9-lelo-200Y-U4YJ-1BfG3W', 'scsi-0QEMU_QEMU_HARDDISK_669b4378-b931-4094-a90b-e4d774be1d1d', 'scsi-SQEMU_QEMU_HARDDISK_669b4378-b931-4094-a90b-e4d774be1d1d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30074f97-ca08-4933-8c1f-7f138584444d', 'scsi-SQEMU_QEMU_HARDDISK_30074f97-ca08-4933-8c1f-7f138584444d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-18-27-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:28:45.482922 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:28:45.482934 | orchestrator | 2025-05-28 19:28:45.482946 | orchestrator | TASK [ceph-facts : get ceph current 
status] ************************************
2025-05-28 19:28:45.482957 | orchestrator | Wednesday 28 May 2025  19:26:54 +0000 (0:00:00.593) 0:00:17.577 *********
2025-05-28 19:28:45.482974 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-05-28 19:28:45.482985 | orchestrator |
2025-05-28 19:28:45.482996 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] *******************************
2025-05-28 19:28:45.483007 | orchestrator | Wednesday 28 May 2025  19:26:55 +0000 (0:00:01.358) 0:00:18.936 *********
2025-05-28 19:28:45.483019 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:28:45.483030 | orchestrator |
2025-05-28 19:28:45.483040 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] **************************************
2025-05-28 19:28:45.483052 | orchestrator | Wednesday 28 May 2025  19:26:55 +0000 (0:00:00.152) 0:00:19.089 *********
2025-05-28 19:28:45.483064 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:28:45.483077 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:28:45.483089 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:28:45.483101 | orchestrator |
2025-05-28 19:28:45.483114 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ******************************
2025-05-28 19:28:45.483126 | orchestrator | Wednesday 28 May 2025  19:26:55 +0000 (0:00:00.362) 0:00:19.451 *********
2025-05-28 19:28:45.483138 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:28:45.483151 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:28:45.483163 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:28:45.483176 | orchestrator |
2025-05-28 19:28:45.483188 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-05-28 19:28:45.483201 | orchestrator | Wednesday 28 May 2025  19:26:56 +0000 (0:00:00.678) 0:00:20.130 *********
2025-05-28 19:28:45.483214 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:28:45.483226 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:28:45.483238 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:28:45.483251 | orchestrator |
2025-05-28 19:28:45.483263 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-28 19:28:45.483284 | orchestrator | Wednesday 28 May 2025  19:26:56 +0000 (0:00:00.285) 0:00:20.416 *********
2025-05-28 19:28:45.483296 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:28:45.483309 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:28:45.483321 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:28:45.483334 | orchestrator |
2025-05-28 19:28:45.483346 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-28 19:28:45.483366 | orchestrator | Wednesday 28 May 2025  19:26:57 +0000 (0:00:00.898) 0:00:21.314 *********
2025-05-28 19:28:45.483379 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.483392 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.483404 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.483415 | orchestrator |
2025-05-28 19:28:45.483426 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-28 19:28:45.483437 | orchestrator | Wednesday 28 May 2025  19:26:58 +0000 (0:00:00.323) 0:00:21.638 *********
2025-05-28 19:28:45.483463 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.483482 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.483501 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.483518 | orchestrator |
2025-05-28 19:28:45.483536 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-28 19:28:45.483563 | orchestrator | Wednesday 28 May 2025  19:26:58 +0000 (0:00:00.436) 0:00:22.074 *********
2025-05-28 19:28:45.483587 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.483605 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.483625 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.483645 | orchestrator |
2025-05-28 19:28:45.483663 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-05-28 19:28:45.483676 | orchestrator | Wednesday 28 May 2025  19:26:58 +0000 (0:00:00.335) 0:00:22.410 *********
2025-05-28 19:28:45.483694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 19:28:45.483706 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 19:28:45.483717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 19:28:45.483728 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 19:28:45.483748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 19:28:45.483759 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.483795 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 19:28:45.483807 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 19:28:45.483818 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.483829 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 19:28:45.483840 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 19:28:45.483851 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.483862 | orchestrator |
2025-05-28 19:28:45.483873 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-05-28 19:28:45.483902 | orchestrator | Wednesday 28 May 2025  19:26:59 +0000 (0:00:00.937) 0:00:23.347 *********
2025-05-28 19:28:45.483914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 19:28:45.483926 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 19:28:45.483936 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 19:28:45.483947 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 19:28:45.483959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 19:28:45.483969 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 19:28:45.483980 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 19:28:45.483991 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.484002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 19:28:45.484013 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.484024 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 19:28:45.484035 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.484045 | orchestrator |
2025-05-28 19:28:45.484056 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-28 19:28:45.484067 | orchestrator | Wednesday 28 May 2025  19:27:00 +0000 (0:00:00.838) 0:00:24.185 *********
2025-05-28 19:28:45.484078 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 19:28:45.484089 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 19:28:45.484100 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 19:28:45.484111 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 19:28:45.484121 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 19:28:45.484132 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 19:28:45.484143 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 19:28:45.484154 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 19:28:45.484165 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 19:28:45.484176 | orchestrator |
2025-05-28 19:28:45.484187 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-28 19:28:45.484198 | orchestrator | Wednesday 28 May 2025  19:27:02 +0000 (0:00:02.166) 0:00:26.352 *********
2025-05-28 19:28:45.484208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 19:28:45.484219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 19:28:45.484230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 19:28:45.484241 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 19:28:45.484252 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 19:28:45.484263 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 19:28:45.484274 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.484285 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.484296 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 19:28:45.484315 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 19:28:45.484326 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 19:28:45.484337 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.484348 | orchestrator |
2025-05-28 19:28:45.484359 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-28 19:28:45.484370 | orchestrator | Wednesday 28 May 2025  19:27:03 +0000 (0:00:00.618) 0:00:26.971 *********
2025-05-28 19:28:45.484381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-28 19:28:45.484392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-28 19:28:45.484403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-28 19:28:45.484414 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-28 19:28:45.484425 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.484436 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-28 19:28:45.484447 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-28 19:28:45.484458 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.484469 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-28 19:28:45.484480 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-28 19:28:45.484491 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-28 19:28:45.484502 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.484513 | orchestrator |
2025-05-28 19:28:45.484532 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-05-28 19:28:45.484551 | orchestrator | Wednesday 28 May 2025  19:27:03 +0000 (0:00:00.433) 0:00:27.405 *********
2025-05-28 19:28:45.484576 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-28 19:28:45.484607 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-28 19:28:45.484627 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-28 19:28:45.484645 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.484670 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-28 19:28:45.484696 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-28 19:28:45.484715 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-28 19:28:45.484735 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.484754 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-28 19:28:45.484803 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-28 19:28:45.484816 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-28 19:28:45.484827 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.484839 | orchestrator |
2025-05-28 19:28:45.484850 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-05-28 19:28:45.484861 | orchestrator | Wednesday 28 May 2025  19:27:04 +0000 (0:00:00.392) 0:00:27.798 *********
2025-05-28 19:28:45.484872 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:28:45.484883 | orchestrator |
2025-05-28 19:28:45.484894 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-28 19:28:45.484906 | orchestrator | Wednesday 28 May 2025  19:27:05 +0000 (0:00:00.705) 0:00:28.504 *********
2025-05-28 19:28:45.484917 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.484928 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.484939 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.484950 | orchestrator |
2025-05-28 19:28:45.484971 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-28 19:28:45.484982 | orchestrator | Wednesday 28 May 2025  19:27:05 +0000 (0:00:00.402) 0:00:28.907 *********
2025-05-28 19:28:45.484993 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.485004 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.485015 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.485026 | orchestrator |
2025-05-28 19:28:45.485037 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-28 19:28:45.485048 | orchestrator | Wednesday 28 May 2025  19:27:05 +0000 (0:00:00.357) 0:00:29.264 *********
2025-05-28 19:28:45.485060 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.485071 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.485082 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.485093 | orchestrator |
2025-05-28 19:28:45.485104 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-28 19:28:45.485115 | orchestrator | Wednesday 28 May 2025  19:27:06 +0000 (0:00:00.299) 0:00:29.563 *********
2025-05-28 19:28:45.485126 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:28:45.485136 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:28:45.485147 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:28:45.485158 | orchestrator |
2025-05-28 19:28:45.485169 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-28 19:28:45.485180 | orchestrator | Wednesday 28 May 2025  19:27:06 +0000 (0:00:00.620) 0:00:30.184 *********
2025-05-28 19:28:45.485191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:28:45.485202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:28:45.485213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:28:45.485225 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.485236 | orchestrator |
2025-05-28 19:28:45.485248 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-28 19:28:45.485268 | orchestrator | Wednesday 28 May 2025  19:27:07 +0000 (0:00:00.412) 0:00:30.596 *********
2025-05-28 19:28:45.485287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:28:45.485305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:28:45.485335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:28:45.485355 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.485375 | orchestrator |
2025-05-28 19:28:45.485394 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-28 19:28:45.485405 | orchestrator | Wednesday 28 May 2025  19:27:07 +0000 (0:00:00.377) 0:00:30.974 *********
2025-05-28 19:28:45.485416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:28:45.485427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:28:45.485438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:28:45.485449 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.485460 | orchestrator |
2025-05-28 19:28:45.485471 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-28 19:28:45.485483 | orchestrator | Wednesday 28 May 2025  19:27:07 +0000 (0:00:00.374) 0:00:31.349 *********
2025-05-28 19:28:45.485494 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:28:45.485505 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:28:45.485516 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:28:45.485527 | orchestrator |
2025-05-28 19:28:45.485538 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-28 19:28:45.485549 | orchestrator | Wednesday 28 May 2025  19:27:08 +0000 (0:00:00.313) 0:00:31.662 *********
2025-05-28 19:28:45.485560 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-28 19:28:45.485570 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-28 19:28:45.485588 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-28 19:28:45.485600 | orchestrator |
2025-05-28 19:28:45.485611 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-28 19:28:45.485632 | orchestrator | Wednesday 28 May 2025  19:27:09 +0000 (0:00:00.864) 0:00:32.527 *********
2025-05-28 19:28:45.485646 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.485665 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.485684 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.485703 | orchestrator |
2025-05-28 19:28:45.485721 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-28 19:28:45.485740 | orchestrator | Wednesday 28 May 2025  19:27:09 +0000 (0:00:00.504) 0:00:33.031 *********
2025-05-28 19:28:45.485758 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.485831 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.485850 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.485867 | orchestrator |
2025-05-28 19:28:45.485886 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-28 19:28:45.485918 | orchestrator | Wednesday 28 May 2025  19:27:09 +0000 (0:00:00.330) 0:00:33.361 *********
2025-05-28 19:28:45.485938 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-28 19:28:45.485956 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.485976 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-28 19:28:45.485992 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.486003 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-28 19:28:45.486014 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.486077 | orchestrator |
2025-05-28 19:28:45.486090 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-28 19:28:45.486101 | orchestrator | Wednesday 28 May 2025  19:27:10 +0000 (0:00:00.432) 0:00:33.794 *********
2025-05-28 19:28:45.486112 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-28 19:28:45.486123 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.486135 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-28 19:28:45.486154 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.486166 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-28 19:28:45.486177 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.486188 | orchestrator |
2025-05-28 19:28:45.486199 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-28 19:28:45.486210 | orchestrator | Wednesday 28 May 2025  19:27:10 +0000 (0:00:00.333) 0:00:34.128 *********
2025-05-28 19:28:45.486221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:28:45.486233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-28 19:28:45.486244 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-28 19:28:45.486255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-28 19:28:45.486266 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.486277 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-28 19:28:45.486287 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-28 19:28:45.486298 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-28 19:28:45.486309 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.486320 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-28 19:28:45.486332 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-28 19:28:45.486343 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.486354 | orchestrator |
2025-05-28 19:28:45.486365 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-28 19:28:45.486376 | orchestrator | Wednesday 28 May 2025  19:27:11 +0000 (0:00:00.884) 0:00:35.012 *********
2025-05-28 19:28:45.486387 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:28:45.486398 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:28:45.486420 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:28:45.486431 | orchestrator |
2025-05-28 19:28:45.486443 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-28 19:28:45.486454 | orchestrator | Wednesday 28 May 2025  19:27:11 +0000 (0:00:00.291) 0:00:35.304 *********
2025-05-28 19:28:45.486464 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-28 19:28:45.486475 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-28 19:28:45.486486 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-28 19:28:45.486497 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-05-28 19:28:45.486508 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-28 19:28:45.486519 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-28 19:28:45.486530 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-28 19:28:45.486541 | orchestrator |
2025-05-28 19:28:45.486552 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-28 19:28:45.486563 | orchestrator | Wednesday 28 May 2025  19:27:12 +0000 (0:00:01.032) 0:00:36.336 *********
2025-05-28 19:28:45.486574 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-28
19:28:45.486585 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 19:28:45.486596 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 19:28:45.486613 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-28 19:28:45.486625 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-28 19:28:45.486636 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-28 19:28:45.486646 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-28 19:28:45.486657 | orchestrator | 2025-05-28 19:28:45.486668 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-28 19:28:45.486679 | orchestrator | Wednesday 28 May 2025 19:27:14 +0000 (0:00:01.802) 0:00:38.138 ********* 2025-05-28 19:28:45.486690 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:28:45.486701 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:28:45.486712 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-28 19:28:45.486724 | orchestrator | 2025-05-28 19:28:45.486735 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-28 19:28:45.486762 | orchestrator | Wednesday 28 May 2025 19:27:15 +0000 (0:00:00.539) 0:00:38.678 ********* 2025-05-28 19:28:45.486913 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 19:28:45.486935 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 
'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 19:28:45.486947 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 19:28:45.486958 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 19:28:45.486987 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-28 19:28:45.486999 | orchestrator | 2025-05-28 19:28:45.487011 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-28 19:28:45.487022 | orchestrator | Wednesday 28 May 2025 19:27:55 +0000 (0:00:40.099) 0:01:18.777 ********* 2025-05-28 19:28:45.487033 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487044 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487055 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487067 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487078 | orchestrator | changed: 
[testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487089 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487100 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-28 19:28:45.487111 | orchestrator | 2025-05-28 19:28:45.487122 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-28 19:28:45.487133 | orchestrator | Wednesday 28 May 2025 19:28:15 +0000 (0:00:20.330) 0:01:39.108 ********* 2025-05-28 19:28:45.487144 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487155 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487166 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487177 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487188 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487199 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487210 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-28 19:28:45.487221 | orchestrator | 2025-05-28 19:28:45.487233 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-28 19:28:45.487244 | orchestrator | Wednesday 28 May 2025 19:28:24 +0000 (0:00:09.248) 0:01:48.356 ********* 2025-05-28 19:28:45.487255 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487266 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 19:28:45.487284 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 
2025-05-28 19:28:45.487294 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487305 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 19:28:45.487314 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-28 19:28:45.487324 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487334 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 19:28:45.487344 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-28 19:28:45.487354 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487364 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 19:28:45.487386 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-28 19:28:45.487403 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487413 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 19:28:45.487423 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-28 19:28:45.487433 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-28 19:28:45.487442 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-28 19:28:45.487452 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-28 19:28:45.487462 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-28 19:28:45.487472 | orchestrator | 2025-05-28 19:28:45.487482 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-28 19:28:45.487492 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-28 19:28:45.487503 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-05-28 19:28:45.487513 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-05-28 19:28:45.487523 | orchestrator | 2025-05-28 19:28:45.487532 | orchestrator | 2025-05-28 19:28:45.487542 | orchestrator | 2025-05-28 19:28:45.487552 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:28:45.487562 | orchestrator | Wednesday 28 May 2025 19:28:42 +0000 (0:00:17.975) 0:02:06.331 ********* 2025-05-28 19:28:45.487572 | orchestrator | =============================================================================== 2025-05-28 19:28:45.487581 | orchestrator | create openstack pool(s) ----------------------------------------------- 40.10s 2025-05-28 19:28:45.487591 | orchestrator | generate keys ---------------------------------------------------------- 20.33s 2025-05-28 19:28:45.487601 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.98s 2025-05-28 19:28:45.487611 | orchestrator | get keys from monitors -------------------------------------------------- 9.25s 2025-05-28 19:28:45.487621 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.27s 2025-05-28 19:28:45.487631 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 2.17s 2025-05-28 19:28:45.487640 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.80s 2025-05-28 19:28:45.487651 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.36s 2025-05-28 19:28:45.487661 | 
orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.36s 2025-05-28 19:28:45.487671 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.03s 2025-05-28 19:28:45.487681 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.94s 2025-05-28 19:28:45.487690 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.90s 2025-05-28 19:28:45.487700 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.88s 2025-05-28 19:28:45.487710 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.86s 2025-05-28 19:28:45.487720 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.84s 2025-05-28 19:28:45.487730 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.80s 2025-05-28 19:28:45.487740 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.76s 2025-05-28 19:28:45.487749 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.71s 2025-05-28 19:28:45.487759 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.69s 2025-05-28 19:28:45.487787 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.68s 2025-05-28 19:28:45.487805 | orchestrator | 2025-05-28 19:28:45.487815 | orchestrator | 2025-05-28 19:28:45.487825 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:28:45.487835 | orchestrator | 2025-05-28 19:28:45.487845 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:28:45.487855 | orchestrator | Wednesday 28 May 2025 19:27:03 +0000 (0:00:00.355) 0:00:00.355 ********* 2025-05-28 19:28:45.487865 | 
orchestrator | ok: [testbed-node-0] 2025-05-28 19:28:45.487875 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:28:45.487890 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:28:45.487900 | orchestrator | 2025-05-28 19:28:45.487910 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:28:45.487920 | orchestrator | Wednesday 28 May 2025 19:27:03 +0000 (0:00:00.424) 0:00:00.779 ********* 2025-05-28 19:28:45.487930 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-28 19:28:45.487940 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-28 19:28:45.487950 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-28 19:28:45.487960 | orchestrator | 2025-05-28 19:28:45.487970 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-28 19:28:45.487980 | orchestrator | 2025-05-28 19:28:45.487990 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-28 19:28:45.488000 | orchestrator | Wednesday 28 May 2025 19:27:03 +0000 (0:00:00.298) 0:00:01.078 ********* 2025-05-28 19:28:45.488010 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:28:45.488020 | orchestrator | 2025-05-28 19:28:45.488036 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-28 19:28:45.488046 | orchestrator | Wednesday 28 May 2025 19:27:04 +0000 (0:00:00.721) 0:00:01.799 ********* 2025-05-28 19:28:45.488059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 
'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 19:28:45.488092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 19:28:45.488105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 19:28:45.488121 | orchestrator | 2025-05-28 19:28:45.488132 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-28 19:28:45.488142 | orchestrator | Wednesday 28 May 2025 19:27:06 +0000 (0:00:01.588) 0:00:03.387 ********* 2025-05-28 19:28:45.488152 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:28:45.488162 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:28:45.488172 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:28:45.488181 | orchestrator | 2025-05-28 19:28:45.488192 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-28 19:28:45.488202 | orchestrator | Wednesday 28 May 2025 19:27:06 +0000 (0:00:00.260) 0:00:03.648 ********* 2025-05-28 19:28:45.488211 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-28 19:28:45.488221 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-28 19:28:45.488231 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-28 19:28:45.488241 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-28 19:28:45.488254 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-28 19:28:45.488264 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-28 19:28:45.488274 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-28 19:28:45.488284 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': 
False})  2025-05-28 19:28:45.488294 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-28 19:28:45.488304 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-28 19:28:45.488314 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-28 19:28:45.488324 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-28 19:28:45.488339 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-28 19:28:45.488350 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-28 19:28:45.488360 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-28 19:28:45.488369 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-28 19:28:45.488379 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-28 19:28:45.488389 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-28 19:28:45.488399 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-28 19:28:45.488409 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-28 19:28:45.488419 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-28 19:28:45.488429 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-28 19:28:45.488440 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-28 19:28:45.488450 | 
orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-28 19:28:45.488460 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-28 19:28:45.488475 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-28 19:28:45.488485 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-28 19:28:45.488495 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-28 19:28:45.488505 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-28 19:28:45.488514 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-28 19:28:45.488524 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-28 19:28:45.488534 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-28 19:28:45.488544 | orchestrator | 2025-05-28 19:28:45.488554 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-28 19:28:45.488564 | orchestrator | Wednesday 28 May 2025 19:27:07 +0000 
(0:00:00.912) 0:00:04.560 *********
2025-05-28 19:28:45.488574 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.488584 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.488594 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.488604 | orchestrator |
2025-05-28 19:28:45.488614 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.488624 | orchestrator | Wednesday 28 May 2025 19:27:07 +0000 (0:00:00.439) 0:00:05.000 *********
2025-05-28 19:28:45.488634 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.488643 | orchestrator |
2025-05-28 19:28:45.488653 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.488663 | orchestrator | Wednesday 28 May 2025 19:27:07 +0000 (0:00:00.124) 0:00:05.124 *********
2025-05-28 19:28:45.488673 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.488683 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.488693 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.488703 | orchestrator |
2025-05-28 19:28:45.488713 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-28 19:28:45.488727 | orchestrator | Wednesday 28 May 2025 19:27:08 +0000 (0:00:00.407) 0:00:05.532 *********
2025-05-28 19:28:45.488737 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.488747 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.488757 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.488788 | orchestrator |
2025-05-28 19:28:45.488798 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.488808 | orchestrator | Wednesday 28 May 2025 19:27:08 +0000 (0:00:00.343) 0:00:05.875 *********
2025-05-28 19:28:45.488818 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.488828 | orchestrator |
2025-05-28 19:28:45.488838 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.488848 | orchestrator | Wednesday 28 May 2025 19:27:08 +0000 (0:00:00.118) 0:00:05.993 *********
2025-05-28 19:28:45.488858 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.488868 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.488878 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.488888 | orchestrator |
2025-05-28 19:28:45.488898 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-28 19:28:45.488914 | orchestrator | Wednesday 28 May 2025 19:27:09 +0000 (0:00:00.452) 0:00:06.446 *********
2025-05-28 19:28:45.488929 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.488939 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.488949 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.488959 | orchestrator |
2025-05-28 19:28:45.488969 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.488979 | orchestrator | Wednesday 28 May 2025 19:27:09 +0000 (0:00:00.471) 0:00:06.917 *********
2025-05-28 19:28:45.488989 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.488999 | orchestrator |
2025-05-28 19:28:45.489009 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.489019 | orchestrator | Wednesday 28 May 2025 19:27:09 +0000 (0:00:00.155) 0:00:07.072 *********
2025-05-28 19:28:45.489029 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489039 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.489049 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.489059 | orchestrator |
2025-05-28 19:28:45.489069 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-28 19:28:45.489079 | orchestrator | Wednesday 28 May 2025 19:27:10 +0000 (0:00:00.406) 0:00:07.478 *********
2025-05-28 19:28:45.489089 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.489099 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.489109 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.489119 | orchestrator |
2025-05-28 19:28:45.489129 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.489139 | orchestrator | Wednesday 28 May 2025 19:27:10 +0000 (0:00:00.434) 0:00:07.913 *********
2025-05-28 19:28:45.489149 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489159 | orchestrator |
2025-05-28 19:28:45.489169 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.489179 | orchestrator | Wednesday 28 May 2025 19:27:10 +0000 (0:00:00.140) 0:00:08.053 *********
2025-05-28 19:28:45.489189 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489199 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.489209 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.489219 | orchestrator |
2025-05-28 19:28:45.489229 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-28 19:28:45.489239 | orchestrator | Wednesday 28 May 2025 19:27:11 +0000 (0:00:00.422) 0:00:08.476 *********
2025-05-28 19:28:45.489249 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.489259 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.489269 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.489279 | orchestrator |
2025-05-28 19:28:45.489289 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.489299 | orchestrator | Wednesday 28 May 2025 19:27:11 +0000 (0:00:00.295) 0:00:08.771 *********
2025-05-28 19:28:45.489309 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489319 | orchestrator |
2025-05-28 19:28:45.489329 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.489339 | orchestrator | Wednesday 28 May 2025 19:27:11 +0000 (0:00:00.284) 0:00:09.055 *********
2025-05-28 19:28:45.489349 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489360 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.489369 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.489379 | orchestrator |
2025-05-28 19:28:45.489389 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-28 19:28:45.489399 | orchestrator | Wednesday 28 May 2025 19:27:12 +0000 (0:00:00.264) 0:00:09.320 *********
2025-05-28 19:28:45.489409 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.489419 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.489429 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.489439 | orchestrator |
2025-05-28 19:28:45.489449 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.489459 | orchestrator | Wednesday 28 May 2025 19:27:12 +0000 (0:00:00.419) 0:00:09.740 *********
2025-05-28 19:28:45.489478 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489488 | orchestrator |
2025-05-28 19:28:45.489498 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.489509 | orchestrator | Wednesday 28 May 2025 19:27:12 +0000 (0:00:00.141) 0:00:09.881 *********
2025-05-28 19:28:45.489518 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489528 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.489538 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.489548 | orchestrator |
2025-05-28 19:28:45.489558 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-28 19:28:45.489568 | orchestrator | Wednesday 28 May 2025 19:27:13 +0000 (0:00:00.436) 0:00:10.318 *********
2025-05-28 19:28:45.489578 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.489589 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.489599 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.489609 | orchestrator |
2025-05-28 19:28:45.489619 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.489629 | orchestrator | Wednesday 28 May 2025 19:27:13 +0000 (0:00:00.513) 0:00:10.832 *********
2025-05-28 19:28:45.489639 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489649 | orchestrator |
2025-05-28 19:28:45.489663 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.489673 | orchestrator | Wednesday 28 May 2025 19:27:13 +0000 (0:00:00.222) 0:00:11.054 *********
2025-05-28 19:28:45.489683 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489693 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.489703 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.489713 | orchestrator |
2025-05-28 19:28:45.489723 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-28 19:28:45.489733 | orchestrator | Wednesday 28 May 2025 19:27:14 +0000 (0:00:00.465) 0:00:11.520 *********
2025-05-28 19:28:45.489743 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.489752 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.489762 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.489796 | orchestrator |
2025-05-28 19:28:45.489806 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.489816 | orchestrator | Wednesday 28 May 2025 19:27:14 +0000 (0:00:00.299) 0:00:11.820 *********
2025-05-28 19:28:45.489826 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489836 | orchestrator |
2025-05-28 19:28:45.489853 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.489864 | orchestrator | Wednesday 28 May 2025 19:27:14 +0000 (0:00:00.236) 0:00:12.056 *********
2025-05-28 19:28:45.489874 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.489884 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.489893 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.489903 | orchestrator |
2025-05-28 19:28:45.489913 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-28 19:28:45.489923 | orchestrator | Wednesday 28 May 2025 19:27:15 +0000 (0:00:00.320) 0:00:12.376 *********
2025-05-28 19:28:45.489933 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.489943 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.489952 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.489962 | orchestrator |
2025-05-28 19:28:45.489972 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.489982 | orchestrator | Wednesday 28 May 2025 19:27:15 +0000 (0:00:00.490) 0:00:12.866 *********
2025-05-28 19:28:45.489992 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.490002 | orchestrator |
2025-05-28 19:28:45.490011 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.490046 | orchestrator | Wednesday 28 May 2025 19:27:15 +0000 (0:00:00.116) 0:00:12.983 *********
2025-05-28 19:28:45.490058 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.490068 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.490077 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.490095 | orchestrator |
2025-05-28 19:28:45.490104 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-28 19:28:45.490114 | orchestrator | Wednesday 28 May 2025 19:27:16 +0000 (0:00:00.408) 0:00:13.391 *********
2025-05-28 19:28:45.490125 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.490134 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.490144 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.490154 | orchestrator |
2025-05-28 19:28:45.490164 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.490174 | orchestrator | Wednesday 28 May 2025 19:27:16 +0000 (0:00:00.400) 0:00:13.792 *********
2025-05-28 19:28:45.490184 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.490193 | orchestrator |
2025-05-28 19:28:45.490203 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.490213 | orchestrator | Wednesday 28 May 2025 19:27:16 +0000 (0:00:00.121) 0:00:13.914 *********
2025-05-28 19:28:45.490223 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.490233 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.490243 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.490253 | orchestrator |
2025-05-28 19:28:45.490263 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-28 19:28:45.490273 | orchestrator | Wednesday 28 May 2025 19:27:17 +0000 (0:00:00.385) 0:00:14.299 *********
2025-05-28 19:28:45.490282 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:28:45.490292 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:28:45.490302 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:28:45.490312 | orchestrator |
2025-05-28 19:28:45.490322 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-28 19:28:45.490332 | orchestrator | Wednesday 28 May 2025 19:27:17 +0000 (0:00:00.391) 0:00:14.691 *********
2025-05-28 19:28:45.490342 | orchestrator | skipping: [testbed-node-0]
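The block above is Ansible's profile_tasks timing output cycling through the horizon role's per-service policy-file checks. As a side note for readers post-processing logs of this shape (this is not part of the job output): the parenthesised value printed under each TASK banner is the duration of the task that just finished, so durations have to be attributed to the *previous* header. A minimal sketch under that assumption; the regexes and sample lines are illustrative, not taken from any OSISM tooling:

```python
import re
from collections import defaultdict

# A TASK banner looks like:  TASK [horizon : Update policy file name] ***...
# The line below it looks like:
#   Wednesday 28 May 2025 19:27:08 +0000 (0:00:00.407) 0:00:05.532 ***...
# where "(0:00:00.407)" is the elapsed time of the PREVIOUS task.
TASK_RE = re.compile(r"TASK \[(?P<name>[^\]]+)\]")
TIME_RE = re.compile(r"\((?P<h>\d+):(?P<m>\d+):(?P<s>\d+\.\d+)\)")

def total_durations(lines):
    """Sum per-task durations from profile_tasks-style log lines."""
    totals = defaultdict(float)
    last_header = None   # most recently seen TASK name
    prev_task = None     # task whose duration the next timing line reports
    for line in lines:
        header = TASK_RE.search(line)
        if header:
            last_header = header.group("name")
            continue
        timing = TIME_RE.search(line)
        if timing:
            if prev_task is not None:
                h, m, s = int(timing["h"]), int(timing["m"]), float(timing["s"])
                totals[prev_task] += 3600 * h + 60 * m + s
            prev_task = last_header
    return dict(totals)
```

With repeated task names, as in the loop above, the per-iteration durations accumulate into one total per task.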
2025-05-28 19:28:45.490352 | orchestrator |
2025-05-28 19:28:45.490362 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-28 19:28:45.490372 | orchestrator | Wednesday 28 May 2025 19:27:17 +0000 (0:00:00.120) 0:00:14.812 *********
2025-05-28 19:28:45.490382 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.490392 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.490402 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.490411 | orchestrator |
2025-05-28 19:28:45.490422 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-05-28 19:28:45.490432 | orchestrator | Wednesday 28 May 2025 19:27:18 +0000 (0:00:00.373) 0:00:15.186 *********
2025-05-28 19:28:45.490441 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:28:45.490451 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:28:45.490461 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:28:45.490471 | orchestrator |
2025-05-28 19:28:45.490481 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-05-28 19:28:45.490491 | orchestrator | Wednesday 28 May 2025 19:27:20 +0000 (0:00:02.406) 0:00:17.593 *********
2025-05-28 19:28:45.490501 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-28 19:28:45.490511 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-28 19:28:45.490521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-28 19:28:45.490531 | orchestrator |
2025-05-28 19:28:45.490541 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-05-28 19:28:45.490550 | orchestrator | Wednesday 28 May 2025 19:27:22 +0000 (0:00:02.263) 0:00:19.856 *********
2025-05-28 19:28:45.490568 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-28 19:28:45.490579 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-28 19:28:45.490590 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-28 19:28:45.490605 | orchestrator |
2025-05-28 19:28:45.490615 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-05-28 19:28:45.490625 | orchestrator | Wednesday 28 May 2025 19:27:24 +0000 (0:00:02.320) 0:00:22.176 *********
2025-05-28 19:28:45.490635 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-28 19:28:45.490645 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-28 19:28:45.490661 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-28 19:28:45.490672 | orchestrator |
2025-05-28 19:28:45.490682 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-05-28 19:28:45.490692 | orchestrator | Wednesday 28 May 2025 19:27:27 +0000 (0:00:02.029) 0:00:24.206 *********
2025-05-28 19:28:45.490701 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.490711 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:28:45.490721 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:28:45.490731 | orchestrator |
2025-05-28 19:28:45.490740 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-05-28 19:28:45.490750 | orchestrator | Wednesday 28 May 2025 19:27:27 +0000 (0:00:00.553) 0:00:24.760 *********
2025-05-28 19:28:45.490760 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:28:45.490808 | orchestrator | skipping:
[testbed-node-1] 2025-05-28 19:28:45.490819 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:28:45.490829 | orchestrator | 2025-05-28 19:28:45.490839 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-28 19:28:45.490848 | orchestrator | Wednesday 28 May 2025 19:27:28 +0000 (0:00:00.434) 0:00:25.195 ********* 2025-05-28 19:28:45.490858 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:28:45.490868 | orchestrator | 2025-05-28 19:28:45.490877 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-28 19:28:45.490887 | orchestrator | Wednesday 28 May 2025 19:27:28 +0000 (0:00:00.723) 0:00:25.918 ********* 2025-05-28 19:28:45.490899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 19:28:45.490931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 19:28:45.490948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 19:28:45.490965 | orchestrator | 2025-05-28 19:28:45.490975 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-28 19:28:45.490985 | orchestrator | Wednesday 28 May 2025 19:27:30 +0000 (0:00:01.795) 0:00:27.713 ********* 2025-05-28 19:28:45.491003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}})  2025-05-28 19:28:45.491015 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:28:45.491031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 19:28:45.491047 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:28:45.491065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 19:28:45.491077 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:28:45.491087 | orchestrator | 2025-05-28 19:28:45.491097 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-28 19:28:45.491107 | orchestrator | Wednesday 28 May 2025 19:27:31 +0000 (0:00:01.374) 0:00:29.088 ********* 2025-05-28 19:28:45.491128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 19:28:45.491145 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:28:45.491156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 19:28:45.491172 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:28:45.491194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-28 19:28:45.491206 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:28:45.491216 | orchestrator | 2025-05-28 19:28:45.491226 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 
2025-05-28 19:28:45.491236 | orchestrator | Wednesday 28 May 2025 19:27:33 +0000 (0:00:01.170) 0:00:30.258 ********* 2025-05-28 19:28:45.491246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 19:28:45.491275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 19:28:45.491288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-28 19:28:45.491304 | orchestrator | 2025-05-28 19:28:45.491321 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-28 19:28:45.491331 | orchestrator | Wednesday 28 May 2025 19:27:38 +0000 (0:00:05.208) 0:00:35.467 ********* 2025-05-28 19:28:45.491340 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:28:45.491350 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:28:45.491360 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:28:45.491370 | orchestrator | 2025-05-28 19:28:45.491379 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-28 19:28:45.491389 | orchestrator | Wednesday 28 May 2025 19:27:38 +0000 (0:00:00.349) 0:00:35.816 ********* 2025-05-28 19:28:45.491399 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:28:45.491409 | orchestrator | 2025-05-28 19:28:45.491419 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-28 19:28:45.491428 | orchestrator | Wednesday 28 May 2025 19:27:39 +0000 (0:00:00.514) 0:00:36.330 ********* 2025-05-28 
19:28:45.491438 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:28:45.491448 | orchestrator | 2025-05-28 19:28:45.491463 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-28 19:28:45.491473 | orchestrator | Wednesday 28 May 2025 19:27:41 +0000 (0:00:02.526) 0:00:38.856 ********* 2025-05-28 19:28:45.491483 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:28:45.491493 | orchestrator | 2025-05-28 19:28:45.491503 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-28 19:28:45.491513 | orchestrator | Wednesday 28 May 2025 19:27:43 +0000 (0:00:02.285) 0:00:41.141 ********* 2025-05-28 19:28:45.491523 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:28:45.491533 | orchestrator | 2025-05-28 19:28:45.491543 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-28 19:28:45.491553 | orchestrator | Wednesday 28 May 2025 19:27:57 +0000 (0:00:13.486) 0:00:54.627 ********* 2025-05-28 19:28:45.491562 | orchestrator | 2025-05-28 19:28:45.491572 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-28 19:28:45.491582 | orchestrator | Wednesday 28 May 2025 19:27:57 +0000 (0:00:00.058) 0:00:54.686 ********* 2025-05-28 19:28:45.491592 | orchestrator | 2025-05-28 19:28:45.491602 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-28 19:28:45.491611 | orchestrator | Wednesday 28 May 2025 19:27:57 +0000 (0:00:00.187) 0:00:54.873 ********* 2025-05-28 19:28:45.491621 | orchestrator | 2025-05-28 19:28:45.491631 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-28 19:28:45.491641 | orchestrator | Wednesday 28 May 2025 19:27:57 +0000 (0:00:00.058) 0:00:54.932 ********* 2025-05-28 19:28:45.491650 | orchestrator | changed: [testbed-node-0] 2025-05-28 
19:28:45.491666 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:28:45.491676 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:28:45.491686 | orchestrator | 2025-05-28 19:28:45.491695 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:28:45.491705 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-28 19:28:45.491715 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-28 19:28:45.491725 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-28 19:28:45.491735 | orchestrator | 2025-05-28 19:28:45.491745 | orchestrator | 2025-05-28 19:28:45.491755 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:28:45.491777 | orchestrator | Wednesday 28 May 2025 19:28:42 +0000 (0:00:44.865) 0:01:39.798 ********* 2025-05-28 19:28:45.491788 | orchestrator | =============================================================================== 2025-05-28 19:28:45.491798 | orchestrator | horizon : Restart horizon container ------------------------------------ 44.87s 2025-05-28 19:28:45.491808 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.49s 2025-05-28 19:28:45.491817 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.21s 2025-05-28 19:28:45.491830 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.53s 2025-05-28 19:28:45.491847 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.41s 2025-05-28 19:28:45.491865 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.32s 2025-05-28 19:28:45.491883 | orchestrator | horizon : Creating Horizon database user and setting 
permissions -------- 2.29s 2025-05-28 19:28:45.491900 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.26s 2025-05-28 19:28:45.491913 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.03s 2025-05-28 19:28:45.491923 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.80s 2025-05-28 19:28:45.491933 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.59s 2025-05-28 19:28:45.491943 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.37s 2025-05-28 19:28:45.491953 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.17s 2025-05-28 19:28:45.491963 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.91s 2025-05-28 19:28:45.491972 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s 2025-05-28 19:28:45.491982 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s 2025-05-28 19:28:45.491992 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.55s 2025-05-28 19:28:45.492006 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2025-05-28 19:28:45.492016 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2025-05-28 19:28:45.492026 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2025-05-28 19:28:45.492036 | orchestrator | 2025-05-28 19:28:45 | INFO  | Task cb3ad593-e631-4a31-91d9-bdae3f87d5e2 is in state SUCCESS 2025-05-28 19:28:45.492046 | orchestrator | 2025-05-28 19:28:45 | INFO  | Task 86e2c5b4-0783-4075-838a-11c278e8316c is in state STARTED 2025-05-28 19:28:45.492056 | orchestrator | 2025-05-28 19:28:45 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:28:45.492066 | orchestrator | 2025-05-28 19:28:45 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:25.229279 | orchestrator | 2025-05-28 19:29:25 | INFO  | Task fa6da734-35b5-4b55-a868-5c5cee1dbdac is in state STARTED 2025-05-28 19:29:25.232708 | orchestrator | 2025-05-28 19:29:25.232767 |
orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-28 19:29:25.232778 | orchestrator | 2025-05-28 19:29:25.232787 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-05-28 19:29:25.232796 | orchestrator | 2025-05-28 19:29:25.232804 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-28 19:29:25.232814 | orchestrator | Wednesday 28 May 2025 19:28:55 +0000 (0:00:00.444) 0:00:00.444 ********* 2025-05-28 19:29:25.232822 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-05-28 19:29:25.232832 | orchestrator | 2025-05-28 19:29:25.232840 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-28 19:29:25.232849 | orchestrator | Wednesday 28 May 2025 19:28:56 +0000 (0:00:00.191) 0:00:00.636 ********* 2025-05-28 19:29:25.232859 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:29:25.232868 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-28 19:29:25.232889 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-28 19:29:25.232898 | orchestrator | 2025-05-28 19:29:25.232907 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-28 19:29:25.232916 | orchestrator | Wednesday 28 May 2025 19:28:56 +0000 (0:00:00.848) 0:00:01.485 ********* 2025-05-28 19:29:25.232925 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-28 19:29:25.232933 | orchestrator | 2025-05-28 19:29:25.232942 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-28 19:29:25.232951 | orchestrator | Wednesday 28 May 2025 19:28:57 +0000 (0:00:00.226) 0:00:01.711 ********* 2025-05-28 19:29:25.232959 | orchestrator | ok: 
[testbed-node-0] 2025-05-28 19:29:25.232968 | orchestrator | 2025-05-28 19:29:25.232977 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-28 19:29:25.232986 | orchestrator | Wednesday 28 May 2025 19:28:57 +0000 (0:00:00.601) 0:00:02.312 ********* 2025-05-28 19:29:25.232994 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.233003 | orchestrator | 2025-05-28 19:29:25.233012 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-28 19:29:25.233021 | orchestrator | Wednesday 28 May 2025 19:28:57 +0000 (0:00:00.124) 0:00:02.437 ********* 2025-05-28 19:29:25.233030 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.233039 | orchestrator | 2025-05-28 19:29:25.233048 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-28 19:29:25.233056 | orchestrator | Wednesday 28 May 2025 19:28:58 +0000 (0:00:00.459) 0:00:02.897 ********* 2025-05-28 19:29:25.233065 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.233074 | orchestrator | 2025-05-28 19:29:25.233083 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-28 19:29:25.233092 | orchestrator | Wednesday 28 May 2025 19:28:58 +0000 (0:00:00.133) 0:00:03.030 ********* 2025-05-28 19:29:25.233100 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.233109 | orchestrator | 2025-05-28 19:29:25.233118 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-28 19:29:25.233126 | orchestrator | Wednesday 28 May 2025 19:28:58 +0000 (0:00:00.126) 0:00:03.156 ********* 2025-05-28 19:29:25.233135 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.233144 | orchestrator | 2025-05-28 19:29:25.233153 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-28 19:29:25.233161 | orchestrator | Wednesday 
28 May 2025 19:28:58 +0000 (0:00:00.130) 0:00:03.287 ********* 2025-05-28 19:29:25.233170 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.233179 | orchestrator | 2025-05-28 19:29:25.233188 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-28 19:29:25.233222 | orchestrator | Wednesday 28 May 2025 19:28:58 +0000 (0:00:00.150) 0:00:03.437 ********* 2025-05-28 19:29:25.233238 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.233253 | orchestrator | 2025-05-28 19:29:25.233269 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-28 19:29:25.233284 | orchestrator | Wednesday 28 May 2025 19:28:58 +0000 (0:00:00.110) 0:00:03.547 ********* 2025-05-28 19:29:25.233298 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:29:25.233307 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 19:29:25.233316 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 19:29:25.233325 | orchestrator | 2025-05-28 19:29:25.233333 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-28 19:29:25.233342 | orchestrator | Wednesday 28 May 2025 19:28:59 +0000 (0:00:00.864) 0:00:04.412 ********* 2025-05-28 19:29:25.233351 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.233360 | orchestrator | 2025-05-28 19:29:25.233369 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-28 19:29:25.233377 | orchestrator | Wednesday 28 May 2025 19:29:00 +0000 (0:00:00.242) 0:00:04.654 ********* 2025-05-28 19:29:25.233386 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:29:25.233395 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 19:29:25.233403 | 
orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 19:29:25.233412 | orchestrator | 2025-05-28 19:29:25.233421 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-28 19:29:25.233430 | orchestrator | Wednesday 28 May 2025 19:29:02 +0000 (0:00:01.955) 0:00:06.609 ********* 2025-05-28 19:29:25.233438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 19:29:25.233447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 19:29:25.233456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 19:29:25.233464 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.233473 | orchestrator | 2025-05-28 19:29:25.233482 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-28 19:29:25.233501 | orchestrator | Wednesday 28 May 2025 19:29:02 +0000 (0:00:00.417) 0:00:07.027 ********* 2025-05-28 19:29:25.233512 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-28 19:29:25.233524 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-28 19:29:25.233537 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-28 19:29:25.233547 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.233555 | orchestrator | 2025-05-28 
19:29:25.233564 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-28 19:29:25.233573 | orchestrator | Wednesday 28 May 2025 19:29:03 +0000 (0:00:00.780) 0:00:07.808 ********* 2025-05-28 19:29:25.233583 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 19:29:25.233594 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 19:29:25.233610 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-28 19:29:25.233619 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.233628 | orchestrator | 2025-05-28 19:29:25.233637 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-28 19:29:25.233645 | orchestrator | Wednesday 28 May 2025 19:29:03 +0000 (0:00:00.231) 0:00:08.039 ********* 2025-05-28 19:29:25.233656 | orchestrator | 
ok: [testbed-node-0] => (item={'changed': True, 'stdout': '276687da0d8a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-28 19:29:00.739951', 'end': '2025-05-28 19:29:00.780592', 'delta': '0:00:00.040641', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['276687da0d8a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-28 19:29:25.233667 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'c477ebe64635', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-28 19:29:01.298861', 'end': '2025-05-28 19:29:01.342898', 'delta': '0:00:00.044037', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c477ebe64635'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-28 19:29:25.233682 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '60398fb80f6e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-28 19:29:01.880724', 'end': '2025-05-28 19:29:01.916882', 'delta': '0:00:00.036158', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['60398fb80f6e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-28 19:29:25.233692 | orchestrator | 2025-05-28 19:29:25.233701 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-28 19:29:25.233710 | orchestrator | Wednesday 28 May 2025 19:29:03 +0000 (0:00:00.210) 0:00:08.250 ********* 2025-05-28 19:29:25.233777 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.233787 | orchestrator | 2025-05-28 19:29:25.233795 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-28 19:29:25.233804 | orchestrator | Wednesday 28 May 2025 19:29:03 +0000 (0:00:00.282) 0:00:08.532 ********* 2025-05-28 19:29:25.233813 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-28 19:29:25.233828 | orchestrator | 2025-05-28 19:29:25.233837 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-28 19:29:25.233845 | orchestrator | Wednesday 28 May 2025 19:29:05 +0000 (0:00:01.578) 0:00:10.111 ********* 2025-05-28 19:29:25.233854 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.233863 | orchestrator | 2025-05-28 19:29:25.233872 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-28 19:29:25.233881 | orchestrator | Wednesday 28 May 2025 19:29:05 +0000 (0:00:00.145) 0:00:10.257 ********* 2025-05-28 19:29:25.233889 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.233898 | orchestrator | 2025-05-28 19:29:25.233906 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-28 19:29:25.233915 | orchestrator | Wednesday 28 May 2025 19:29:05 +0000 (0:00:00.240) 0:00:10.497 ********* 2025-05-28 
19:29:25.233924 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.233932 | orchestrator | 2025-05-28 19:29:25.233941 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-28 19:29:25.233950 | orchestrator | Wednesday 28 May 2025 19:29:06 +0000 (0:00:00.116) 0:00:10.613 ********* 2025-05-28 19:29:25.233959 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.233967 | orchestrator | 2025-05-28 19:29:25.233976 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-28 19:29:25.233985 | orchestrator | Wednesday 28 May 2025 19:29:06 +0000 (0:00:00.131) 0:00:10.745 ********* 2025-05-28 19:29:25.233994 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234002 | orchestrator | 2025-05-28 19:29:25.234011 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-28 19:29:25.234063 | orchestrator | Wednesday 28 May 2025 19:29:06 +0000 (0:00:00.222) 0:00:10.968 ********* 2025-05-28 19:29:25.234072 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234081 | orchestrator | 2025-05-28 19:29:25.234090 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-28 19:29:25.234099 | orchestrator | Wednesday 28 May 2025 19:29:06 +0000 (0:00:00.124) 0:00:11.092 ********* 2025-05-28 19:29:25.234108 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234116 | orchestrator | 2025-05-28 19:29:25.234125 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-28 19:29:25.234134 | orchestrator | Wednesday 28 May 2025 19:29:06 +0000 (0:00:00.137) 0:00:11.229 ********* 2025-05-28 19:29:25.234143 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234151 | orchestrator | 2025-05-28 19:29:25.234160 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] 
*************************** 2025-05-28 19:29:25.234169 | orchestrator | Wednesday 28 May 2025 19:29:06 +0000 (0:00:00.129) 0:00:11.359 ********* 2025-05-28 19:29:25.234178 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234186 | orchestrator | 2025-05-28 19:29:25.234195 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-28 19:29:25.234204 | orchestrator | Wednesday 28 May 2025 19:29:06 +0000 (0:00:00.121) 0:00:11.480 ********* 2025-05-28 19:29:25.234213 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234222 | orchestrator | 2025-05-28 19:29:25.234230 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-28 19:29:25.234239 | orchestrator | Wednesday 28 May 2025 19:29:07 +0000 (0:00:00.115) 0:00:11.596 ********* 2025-05-28 19:29:25.234249 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234257 | orchestrator | 2025-05-28 19:29:25.234266 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-28 19:29:25.234275 | orchestrator | Wednesday 28 May 2025 19:29:07 +0000 (0:00:00.306) 0:00:11.902 ********* 2025-05-28 19:29:25.234284 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234292 | orchestrator | 2025-05-28 19:29:25.234301 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-28 19:29:25.234310 | orchestrator | Wednesday 28 May 2025 19:29:07 +0000 (0:00:00.126) 0:00:12.028 ********* 2025-05-28 19:29:25.234319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:29:25.234342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:29:25.234395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:29:25.234408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:29:25.234417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:29:25.234426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:29:25.234435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:29:25.234444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-28 19:29:25.234468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7', 'scsi-SQEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7a9cf88-9364-41b8-88ad-d6642f89e1c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:29:25.234488 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-28-18-27-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-28 19:29:25.234498 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234507 | orchestrator | 2025-05-28 19:29:25.234516 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-28 19:29:25.234526 | orchestrator | Wednesday 28 May 2025 19:29:07 +0000 (0:00:00.260) 0:00:12.289 ********* 2025-05-28 19:29:25.234534 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234543 | orchestrator | 2025-05-28 19:29:25.234552 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-28 19:29:25.234561 | orchestrator | Wednesday 28 May 2025 19:29:07 +0000 (0:00:00.232) 0:00:12.522 ********* 2025-05-28 19:29:25.234570 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234578 | orchestrator | 2025-05-28 19:29:25.234587 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-28 19:29:25.234596 | orchestrator | Wednesday 28 May 2025 19:29:08 +0000 (0:00:00.131) 0:00:12.653 ********* 2025-05-28 19:29:25.234605 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234614 | orchestrator | 2025-05-28 19:29:25.234623 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-28 19:29:25.234632 | orchestrator | Wednesday 28 May 2025 19:29:08 +0000 (0:00:00.134) 0:00:12.788 
********* 2025-05-28 19:29:25.234640 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.234649 | orchestrator | 2025-05-28 19:29:25.234658 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-28 19:29:25.234666 | orchestrator | Wednesday 28 May 2025 19:29:08 +0000 (0:00:00.492) 0:00:13.280 ********* 2025-05-28 19:29:25.234675 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.234684 | orchestrator | 2025-05-28 19:29:25.234693 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-28 19:29:25.234702 | orchestrator | Wednesday 28 May 2025 19:29:08 +0000 (0:00:00.139) 0:00:13.420 ********* 2025-05-28 19:29:25.234733 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.234743 | orchestrator | 2025-05-28 19:29:25.234752 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-28 19:29:25.234761 | orchestrator | Wednesday 28 May 2025 19:29:09 +0000 (0:00:00.474) 0:00:13.894 ********* 2025-05-28 19:29:25.234770 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.234778 | orchestrator | 2025-05-28 19:29:25.234787 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-28 19:29:25.234796 | orchestrator | Wednesday 28 May 2025 19:29:09 +0000 (0:00:00.128) 0:00:14.022 ********* 2025-05-28 19:29:25.234804 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234813 | orchestrator | 2025-05-28 19:29:25.234822 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-28 19:29:25.234830 | orchestrator | Wednesday 28 May 2025 19:29:10 +0000 (0:00:00.665) 0:00:14.688 ********* 2025-05-28 19:29:25.234839 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234848 | orchestrator | 2025-05-28 19:29:25.234857 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to 
monitor_address_block ipv4] *** 2025-05-28 19:29:25.234865 | orchestrator | Wednesday 28 May 2025 19:29:10 +0000 (0:00:00.146) 0:00:14.835 ********* 2025-05-28 19:29:25.234874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 19:29:25.234883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 19:29:25.234891 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 19:29:25.234900 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234909 | orchestrator | 2025-05-28 19:29:25.234918 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-28 19:29:25.234927 | orchestrator | Wednesday 28 May 2025 19:29:10 +0000 (0:00:00.473) 0:00:15.308 ********* 2025-05-28 19:29:25.234935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 19:29:25.234944 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 19:29:25.234953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 19:29:25.234962 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.234970 | orchestrator | 2025-05-28 19:29:25.234984 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-28 19:29:25.234994 | orchestrator | Wednesday 28 May 2025 19:29:11 +0000 (0:00:00.448) 0:00:15.757 ********* 2025-05-28 19:29:25.235003 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:29:25.235012 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-28 19:29:25.235020 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-28 19:29:25.235029 | orchestrator | 2025-05-28 19:29:25.235038 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-28 19:29:25.235047 | orchestrator | Wednesday 28 May 2025 19:29:12 +0000 (0:00:01.105) 0:00:16.863 
********* 2025-05-28 19:29:25.235055 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 19:29:25.235064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 19:29:25.235073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 19:29:25.235082 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.235090 | orchestrator | 2025-05-28 19:29:25.235104 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-28 19:29:25.235113 | orchestrator | Wednesday 28 May 2025 19:29:12 +0000 (0:00:00.212) 0:00:17.075 ********* 2025-05-28 19:29:25.235121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-28 19:29:25.235154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-28 19:29:25.235164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-28 19:29:25.235173 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.235181 | orchestrator | 2025-05-28 19:29:25.235190 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-28 19:29:25.235199 | orchestrator | Wednesday 28 May 2025 19:29:12 +0000 (0:00:00.219) 0:00:17.294 ********* 2025-05-28 19:29:25.235215 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-28 19:29:25.235241 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-28 19:29:25.235252 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-28 19:29:25.235261 | orchestrator | 2025-05-28 19:29:25.235269 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-28 19:29:25.235278 | orchestrator | Wednesday 28 May 2025 19:29:12 +0000 (0:00:00.198) 0:00:17.493 ********* 2025-05-28 
19:29:25.235287 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.235296 | orchestrator | 2025-05-28 19:29:25.235304 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-28 19:29:25.235313 | orchestrator | Wednesday 28 May 2025 19:29:13 +0000 (0:00:00.146) 0:00:17.639 ********* 2025-05-28 19:29:25.235321 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:25.235330 | orchestrator | 2025-05-28 19:29:25.235339 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-28 19:29:25.235348 | orchestrator | Wednesday 28 May 2025 19:29:13 +0000 (0:00:00.120) 0:00:17.759 ********* 2025-05-28 19:29:25.235357 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:29:25.235366 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 19:29:25.235375 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 19:29:25.235383 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-28 19:29:25.235392 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-28 19:29:25.235401 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-28 19:29:25.235409 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-28 19:29:25.235418 | orchestrator | 2025-05-28 19:29:25.235427 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-28 19:29:25.235435 | orchestrator | Wednesday 28 May 2025 19:29:14 +0000 (0:00:01.029) 0:00:18.789 ********* 2025-05-28 19:29:25.235444 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-28 19:29:25.235453 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-28 19:29:25.235462 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-28 19:29:25.235470 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-28 19:29:25.235479 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-28 19:29:25.235487 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-28 19:29:25.235496 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-28 19:29:25.235505 | orchestrator | 2025-05-28 19:29:25.235513 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-28 19:29:25.235522 | orchestrator | Wednesday 28 May 2025 19:29:15 +0000 (0:00:01.452) 0:00:20.241 ********* 2025-05-28 19:29:25.235531 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:25.235540 | orchestrator | 2025-05-28 19:29:25.235549 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-28 19:29:25.235557 | orchestrator | Wednesday 28 May 2025 19:29:16 +0000 (0:00:00.456) 0:00:20.697 ********* 2025-05-28 19:29:25.235566 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 19:29:25.235575 | orchestrator | 2025-05-28 19:29:25.235584 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-28 19:29:25.235593 | orchestrator | Wednesday 28 May 2025 19:29:16 +0000 (0:00:00.602) 0:00:21.300 ********* 2025-05-28 19:29:25.235613 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-28 19:29:25.235622 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-28 19:29:25.235631 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-28 19:29:25.235640 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-28 19:29:25.235649 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-28 19:29:25.235657 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-28 19:29:25.235666 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-28 19:29:25.235675 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-28 19:29:25.235692 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-28 19:29:25.235701 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-28 19:29:25.235710 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-28 19:29:25.235738 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-28 19:29:25.235747 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-28 19:29:25.235756 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-28 19:29:25.235765 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-05-28 19:29:25.235773 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-05-28 19:29:25.235782 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-05-28 19:29:25.235791 | orchestrator | 2025-05-28 19:29:25.235800 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:29:25.235809 | orchestrator | testbed-node-0 : ok=28  changed=3  
unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-28 19:29:25.235818 | orchestrator | 2025-05-28 19:29:25.235827 | orchestrator | 2025-05-28 19:29:25.235836 | orchestrator | 2025-05-28 19:29:25.235844 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:29:25.235853 | orchestrator | Wednesday 28 May 2025 19:29:22 +0000 (0:00:05.821) 0:00:27.122 ********* 2025-05-28 19:29:25.235862 | orchestrator | =============================================================================== 2025-05-28 19:29:25.235871 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 5.82s 2025-05-28 19:29:25.235879 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.96s 2025-05-28 19:29:25.235888 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.58s 2025-05-28 19:29:25.235897 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.45s 2025-05-28 19:29:25.235905 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.11s 2025-05-28 19:29:25.235914 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.03s 2025-05-28 19:29:25.235923 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.86s 2025-05-28 19:29:25.235932 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.85s 2025-05-28 19:29:25.235940 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.78s 2025-05-28 19:29:25.235949 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.67s 2025-05-28 19:29:25.235958 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.60s 2025-05-28 19:29:25.235967 | 
orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.60s 2025-05-28 19:29:25.235982 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.49s 2025-05-28 19:29:25.235990 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.47s 2025-05-28 19:29:25.235999 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.47s 2025-05-28 19:29:25.236008 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.46s 2025-05-28 19:29:25.236016 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.46s 2025-05-28 19:29:25.236025 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.45s 2025-05-28 19:29:25.236034 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.42s 2025-05-28 19:29:25.236042 | orchestrator | ceph-facts : resolve bluestore_wal_device link(s) ----------------------- 0.31s 2025-05-28 19:29:25.236051 | orchestrator | 2025-05-28 19:29:25 | INFO  | Task dfb3a749-ce93-4b4c-a9d7-d738d83e1e76 is in state SUCCESS 2025-05-28 19:29:25.236144 | orchestrator | 2025-05-28 19:29:25 | INFO  | Task 86e2c5b4-0783-4075-838a-11c278e8316c is in state STARTED 2025-05-28 19:29:25.236157 | orchestrator | 2025-05-28 19:29:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:25.236166 | orchestrator | 2025-05-28 19:29:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:28.290700 | orchestrator | 2025-05-28 19:29:28 | INFO  | Task fa6da734-35b5-4b55-a868-5c5cee1dbdac is in state SUCCESS 2025-05-28 19:29:28.293449 | orchestrator | 2025-05-28 19:29:28.293516 | orchestrator | 2025-05-28 19:29:28.293532 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:29:28.293545 | orchestrator 
| 2025-05-28 19:29:28.293556 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 19:29:28.293568 | orchestrator | Wednesday 28 May 2025 19:27:03 +0000 (0:00:00.369) 0:00:00.369 *********
2025-05-28 19:29:28.293579 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:29:28.293591 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:29:28.293602 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:29:28.293613 | orchestrator |
2025-05-28 19:29:28.293623 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 19:29:28.293634 | orchestrator | Wednesday 28 May 2025 19:27:03 +0000 (0:00:00.427) 0:00:00.796 *********
2025-05-28 19:29:28.293645 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-28 19:29:28.293657 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-28 19:29:28.293865 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-28 19:29:28.293945 | orchestrator |
2025-05-28 19:29:28.293957 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-05-28 19:29:28.293968 | orchestrator |
2025-05-28 19:29:28.293979 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-28 19:29:28.293991 | orchestrator | Wednesday 28 May 2025 19:27:03 +0000 (0:00:00.385) 0:00:01.182 *********
2025-05-28 19:29:28.294002 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:29:28.294068 | orchestrator |
2025-05-28 19:29:28.294104 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-05-28 19:29:28.294118 | orchestrator | Wednesday 28 May 2025 19:27:04 +0000 (0:00:00.791) 0:00:01.973 *********
2025-05-28 19:29:28.294136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.294174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.294209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.294232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.294247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.294260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.294280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.294293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.294305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.294318 | orchestrator |
2025-05-28 19:29:28.294331 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-05-28 19:29:28.294352 | orchestrator | Wednesday 28 May 2025 19:27:07 +0000 (0:00:02.402) 0:00:04.376 *********
2025-05-28 19:29:28.294366 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-05-28 19:29:28.294379 | orchestrator |
2025-05-28 19:29:28.294392 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-05-28 19:29:28.294403 | orchestrator | Wednesday 28 May 2025 19:27:07 +0000 (0:00:00.531) 0:00:04.907 *********
2025-05-28 19:29:28.294414 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:29:28.294425 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:29:28.294436 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:29:28.294447 | orchestrator |
2025-05-28 19:29:28.294458 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-05-28 19:29:28.294469 | orchestrator | Wednesday 28 May 2025 19:27:08 +0000 (0:00:00.405) 0:00:05.313 *********
2025-05-28 19:29:28.294480 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-28 19:29:28.294491 | orchestrator |
2025-05-28 19:29:28.294506 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-28 19:29:28.294518 | orchestrator | Wednesday 28 May 2025 19:27:08 +0000 (0:00:00.429) 0:00:05.743 *********
2025-05-28 19:29:28.294528 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:29:28.294539 | orchestrator |
2025-05-28 19:29:28.294551 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-05-28 19:29:28.294567 | orchestrator | Wednesday 28 May 2025 19:27:09 +0000 (0:00:00.748) 0:00:06.492 *********
2025-05-28 19:29:28.294580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.294593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.294618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.294636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.294649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.294666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.294678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.294689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.294701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.294750 | orchestrator |
2025-05-28 19:29:28.294763 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-05-28 19:29:28.294774 | orchestrator | Wednesday 28 May 2025 19:27:12 +0000 (0:00:03.548) 0:00:10.041 *********
2025-05-28 19:29:28.294799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.294820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.294832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.294844 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:29:28.294856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.294869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.294888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.294905 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:29:28.294922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.294935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.294947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.294958 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:29:28.294969 | orchestrator |
2025-05-28 19:29:28.294981 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-05-28 19:29:28.294992 | orchestrator | Wednesday 28 May 2025 19:27:13 +0000 (0:00:00.817) 0:00:10.858 *********
2025-05-28 19:29:28.295004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.295022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.295052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.295064 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:29:28.295076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.295089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.295100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.295111 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:29:28.295131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.295155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-28 19:29:28.295167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-28 19:29:28.295178 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:29:28.295189 | orchestrator |
2025-05-28 19:29:28.295201 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-05-28 19:29:28.295212 | orchestrator | Wednesday 28 May 2025 19:27:14 +0000 (0:00:01.066) 0:00:11.925 *********
2025-05-28 19:29:28.295224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.295236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.295265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-28 19:29:28.295278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'],
'timeout': '30'}}}) 2025-05-28 19:29:28.295290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 19:29:28.295302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 19:29:28.295313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 
19:29:28.295325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.295348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.295360 | orchestrator | 2025-05-28 19:29:28.295371 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-28 19:29:28.295387 | orchestrator | Wednesday 28 May 2025 19:27:18 +0000 (0:00:03.410) 0:00:15.335 ********* 2025-05-28 19:29:28.295399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-28 19:29:28.295411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 19:29:28.295423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-28 19:29:28.295440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 19:29:28.295459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-28 19:29:28.295471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 19:29:28.295483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.295596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.295620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.295639 | orchestrator | 2025-05-28 19:29:28.295651 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-28 19:29:28.295662 | orchestrator | Wednesday 28 May 2025 19:27:23 +0000 (0:00:05.640) 0:00:20.976 ********* 2025-05-28 19:29:28.295674 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:29:28.295686 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:29:28.295697 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:29:28.295730 | orchestrator | 2025-05-28 19:29:28.295753 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-28 19:29:28.295773 | orchestrator | Wednesday 28 May 2025 19:27:25 +0000 (0:00:01.922) 0:00:22.899 ********* 2025-05-28 19:29:28.295794 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.295806 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:29:28.295817 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:29:28.295828 | orchestrator | 2025-05-28 19:29:28.295848 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-28 19:29:28.295860 | orchestrator | Wednesday 28 
May 2025 19:27:26 +0000 (0:00:01.030) 0:00:23.929 ********* 2025-05-28 19:29:28.295871 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.295881 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:29:28.295892 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:29:28.295903 | orchestrator | 2025-05-28 19:29:28.295914 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-28 19:29:28.295925 | orchestrator | Wednesday 28 May 2025 19:27:27 +0000 (0:00:00.433) 0:00:24.363 ********* 2025-05-28 19:29:28.295936 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.295947 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:29:28.295958 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:29:28.295969 | orchestrator | 2025-05-28 19:29:28.295979 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-28 19:29:28.295990 | orchestrator | Wednesday 28 May 2025 19:27:27 +0000 (0:00:00.413) 0:00:24.777 ********* 2025-05-28 19:29:28.296009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-28 19:29:28.296022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 19:29:28.296034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-28 19:29:28.296053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 19:29:28.296078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-28 19:29:28.296091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-28 19:29:28.296102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.296114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.296132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.296143 | orchestrator | 2025-05-28 19:29:28.296155 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-28 19:29:28.296166 | orchestrator | Wednesday 28 May 2025 19:27:29 +0000 (0:00:02.435) 0:00:27.212 ********* 2025-05-28 19:29:28.296177 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.296188 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:29:28.296199 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:29:28.296210 | orchestrator | 2025-05-28 19:29:28.296221 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-28 19:29:28.296233 | orchestrator | Wednesday 28 May 2025 19:27:30 +0000 (0:00:00.443) 0:00:27.656 ********* 2025-05-28 19:29:28.296244 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-28 19:29:28.296255 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-28 19:29:28.296272 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-28 19:29:28.296284 | orchestrator | 2025-05-28 19:29:28.296296 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-28 19:29:28.296315 | orchestrator | Wednesday 28 May 2025 19:27:32 +0000 (0:00:02.409) 0:00:30.065 ********* 2025-05-28 19:29:28.296333 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 19:29:28.296352 | orchestrator | 2025-05-28 19:29:28.296372 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-28 
19:29:28.296391 | orchestrator | Wednesday 28 May 2025 19:27:33 +0000 (0:00:00.660) 0:00:30.726 ********* 2025-05-28 19:29:28.296410 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:29:28.296430 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.296449 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:29:28.296468 | orchestrator | 2025-05-28 19:29:28.296487 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-28 19:29:28.296509 | orchestrator | Wednesday 28 May 2025 19:27:34 +0000 (0:00:01.327) 0:00:32.054 ********* 2025-05-28 19:29:28.296521 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-28 19:29:28.296536 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-28 19:29:28.296556 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 19:29:28.296576 | orchestrator | 2025-05-28 19:29:28.296594 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-28 19:29:28.296612 | orchestrator | Wednesday 28 May 2025 19:27:35 +0000 (0:00:01.100) 0:00:33.155 ********* 2025-05-28 19:29:28.296632 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:28.296652 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:29:28.296672 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:29:28.296690 | orchestrator | 2025-05-28 19:29:28.296764 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-28 19:29:28.296778 | orchestrator | Wednesday 28 May 2025 19:27:36 +0000 (0:00:00.305) 0:00:33.460 ********* 2025-05-28 19:29:28.296789 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-28 19:29:28.296800 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-28 19:29:28.296811 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-28 
19:29:28.296822 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-28 19:29:28.296834 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-28 19:29:28.296845 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-28 19:29:28.296856 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-28 19:29:28.296867 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-28 19:29:28.296878 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-28 19:29:28.296889 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-28 19:29:28.296900 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-28 19:29:28.296910 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-28 19:29:28.296921 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-28 19:29:28.296935 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-28 19:29:28.296954 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-28 19:29:28.296974 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-28 19:29:28.296994 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-28 19:29:28.297014 | orchestrator | changed: [testbed-node-2] => 
(item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-28 19:29:28.297033 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-28 19:29:28.297045 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-28 19:29:28.297056 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-28 19:29:28.297067 | orchestrator | 2025-05-28 19:29:28.297078 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-28 19:29:28.297089 | orchestrator | Wednesday 28 May 2025 19:27:46 +0000 (0:00:10.453) 0:00:43.914 ********* 2025-05-28 19:29:28.297100 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-28 19:29:28.297111 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-28 19:29:28.297121 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-28 19:29:28.297132 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-28 19:29:28.297143 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-28 19:29:28.297163 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-28 19:29:28.297175 | orchestrator | 2025-05-28 19:29:28.297186 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-28 19:29:28.297197 | orchestrator | Wednesday 28 May 2025 19:27:49 +0000 (0:00:03.123) 0:00:47.037 ********* 2025-05-28 19:29:28.297223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-28 19:29:28.297237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-28 19:29:28.297250 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-28 19:29:28.297263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 19:29:28.297282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 19:29:28.297304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-28 19:29:28.297316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.297328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.297339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-28 19:29:28.297351 | orchestrator | 2025-05-28 19:29:28.297362 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-28 19:29:28.297374 | orchestrator | Wednesday 28 May 2025 19:27:52 +0000 (0:00:02.760) 0:00:49.797 ********* 2025-05-28 19:29:28.297385 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.297396 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:29:28.297407 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:29:28.297418 | orchestrator | 2025-05-28 19:29:28.297429 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-28 19:29:28.297440 | orchestrator | Wednesday 28 May 2025 19:27:52 +0000 (0:00:00.292) 0:00:50.090 ********* 2025-05-28 19:29:28.297451 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:29:28.297462 | orchestrator | 2025-05-28 19:29:28.297473 | orchestrator | TASK 
[keystone : Creating Keystone database user and setting permissions] ****** 2025-05-28 19:29:28.297484 | orchestrator | Wednesday 28 May 2025 19:27:55 +0000 (0:00:02.371) 0:00:52.461 ********* 2025-05-28 19:29:28.297501 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:29:28.297512 | orchestrator | 2025-05-28 19:29:28.297523 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-28 19:29:28.297534 | orchestrator | Wednesday 28 May 2025 19:27:57 +0000 (0:00:02.176) 0:00:54.637 ********* 2025-05-28 19:29:28.297545 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:29:28.297556 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:28.297567 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:29:28.297578 | orchestrator | 2025-05-28 19:29:28.297589 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-28 19:29:28.297600 | orchestrator | Wednesday 28 May 2025 19:27:58 +0000 (0:00:01.049) 0:00:55.687 ********* 2025-05-28 19:29:28.297610 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:28.297627 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:29:28.297638 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:29:28.297650 | orchestrator | 2025-05-28 19:29:28.297661 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-28 19:29:28.297672 | orchestrator | Wednesday 28 May 2025 19:27:58 +0000 (0:00:00.366) 0:00:56.053 ********* 2025-05-28 19:29:28.297683 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.297694 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:29:28.297705 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:29:28.297740 | orchestrator | 2025-05-28 19:29:28.297752 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-28 19:29:28.297763 | orchestrator | Wednesday 28 May 2025 19:27:59 +0000 (0:00:00.533) 
0:00:56.586 ********* 2025-05-28 19:29:28.297774 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:29:28.297785 | orchestrator | 2025-05-28 19:29:28.297796 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-28 19:29:28.297812 | orchestrator | Wednesday 28 May 2025 19:28:12 +0000 (0:00:12.905) 0:01:09.492 ********* 2025-05-28 19:29:28.297823 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:29:28.297834 | orchestrator | 2025-05-28 19:29:28.297845 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-28 19:29:28.297856 | orchestrator | Wednesday 28 May 2025 19:28:20 +0000 (0:00:08.660) 0:01:18.153 ********* 2025-05-28 19:29:28.297867 | orchestrator | 2025-05-28 19:29:28.297878 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-28 19:29:28.297889 | orchestrator | Wednesday 28 May 2025 19:28:20 +0000 (0:00:00.063) 0:01:18.217 ********* 2025-05-28 19:29:28.297900 | orchestrator | 2025-05-28 19:29:28.297911 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-28 19:29:28.297922 | orchestrator | Wednesday 28 May 2025 19:28:20 +0000 (0:00:00.053) 0:01:18.271 ********* 2025-05-28 19:29:28.297933 | orchestrator | 2025-05-28 19:29:28.297944 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-28 19:29:28.297955 | orchestrator | Wednesday 28 May 2025 19:28:21 +0000 (0:00:00.055) 0:01:18.326 ********* 2025-05-28 19:29:28.297967 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:29:28.297978 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:29:28.297989 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:29:28.298000 | orchestrator | 2025-05-28 19:29:28.298011 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-28 19:29:28.298071 | 
orchestrator | Wednesday 28 May 2025 19:28:33 +0000 (0:00:12.823) 0:01:31.150 ********* 2025-05-28 19:29:28.298083 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:29:28.298094 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:29:28.298105 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:29:28.298116 | orchestrator | 2025-05-28 19:29:28.298127 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-28 19:29:28.298138 | orchestrator | Wednesday 28 May 2025 19:28:38 +0000 (0:00:04.820) 0:01:35.971 ********* 2025-05-28 19:29:28.298149 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:29:28.298160 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:29:28.298171 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:29:28.298194 | orchestrator | 2025-05-28 19:29:28.298205 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-28 19:29:28.298216 | orchestrator | Wednesday 28 May 2025 19:28:44 +0000 (0:00:05.461) 0:01:41.432 ********* 2025-05-28 19:29:28.298227 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:29:28.298238 | orchestrator | 2025-05-28 19:29:28.298249 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-28 19:29:28.298260 | orchestrator | Wednesday 28 May 2025 19:28:44 +0000 (0:00:00.772) 0:01:42.205 ********* 2025-05-28 19:29:28.298271 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:28.298282 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:29:28.298293 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:29:28.298304 | orchestrator | 2025-05-28 19:29:28.298315 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-28 19:29:28.298326 | orchestrator | Wednesday 28 May 2025 19:28:45 +0000 (0:00:01.060) 0:01:43.265 
********* 2025-05-28 19:29:28.298342 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:29:28.298363 | orchestrator | 2025-05-28 19:29:28.298384 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-28 19:29:28.298404 | orchestrator | Wednesday 28 May 2025 19:28:47 +0000 (0:00:01.535) 0:01:44.801 ********* 2025-05-28 19:29:28.298426 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-28 19:29:28.298448 | orchestrator | 2025-05-28 19:29:28.298467 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-05-28 19:29:28.298489 | orchestrator | Wednesday 28 May 2025 19:28:56 +0000 (0:00:09.255) 0:01:54.057 ********* 2025-05-28 19:29:28.298509 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-05-28 19:29:28.298530 | orchestrator | 2025-05-28 19:29:28.298551 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-05-28 19:29:28.298566 | orchestrator | Wednesday 28 May 2025 19:29:15 +0000 (0:00:18.400) 0:02:12.457 ********* 2025-05-28 19:29:28.298578 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-28 19:29:28.298589 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-28 19:29:28.298600 | orchestrator | 2025-05-28 19:29:28.298611 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-05-28 19:29:28.298622 | orchestrator | Wednesday 28 May 2025 19:29:21 +0000 (0:00:06.220) 0:02:18.677 ********* 2025-05-28 19:29:28.298633 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.298643 | orchestrator | 2025-05-28 19:29:28.298654 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-28 19:29:28.298665 | orchestrator | Wednesday 28 May 2025 19:29:21 
+0000 (0:00:00.122) 0:02:18.799 ********* 2025-05-28 19:29:28.298676 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.298687 | orchestrator | 2025-05-28 19:29:28.298698 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-28 19:29:28.298772 | orchestrator | Wednesday 28 May 2025 19:29:21 +0000 (0:00:00.117) 0:02:18.917 ********* 2025-05-28 19:29:28.298788 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.298799 | orchestrator | 2025-05-28 19:29:28.298809 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-05-28 19:29:28.298820 | orchestrator | Wednesday 28 May 2025 19:29:21 +0000 (0:00:00.107) 0:02:19.024 ********* 2025-05-28 19:29:28.298831 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.298842 | orchestrator | 2025-05-28 19:29:28.298853 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-05-28 19:29:28.298864 | orchestrator | Wednesday 28 May 2025 19:29:22 +0000 (0:00:00.396) 0:02:19.420 ********* 2025-05-28 19:29:28.298875 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:29:28.298918 | orchestrator | 2025-05-28 19:29:28.298930 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-28 19:29:28.298950 | orchestrator | Wednesday 28 May 2025 19:29:25 +0000 (0:00:03.307) 0:02:22.727 ********* 2025-05-28 19:29:28.298967 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:29:28.298978 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:29:28.298989 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:29:28.299000 | orchestrator | 2025-05-28 19:29:28.299011 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:29:28.299022 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-28 
19:29:28.299034 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-28 19:29:28.299045 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-28 19:29:28.299056 | orchestrator | 2025-05-28 19:29:28.299067 | orchestrator | 2025-05-28 19:29:28.299078 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:29:28.299089 | orchestrator | Wednesday 28 May 2025 19:29:25 +0000 (0:00:00.529) 0:02:23.256 ********* 2025-05-28 19:29:28.299100 | orchestrator | =============================================================================== 2025-05-28 19:29:28.299111 | orchestrator | service-ks-register : keystone | Creating services --------------------- 18.40s 2025-05-28 19:29:28.299122 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.91s 2025-05-28 19:29:28.299132 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 12.82s 2025-05-28 19:29:28.299143 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.45s 2025-05-28 19:29:28.299154 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.26s 2025-05-28 19:29:28.299165 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.66s 2025-05-28 19:29:28.299176 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.22s 2025-05-28 19:29:28.299187 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.64s 2025-05-28 19:29:28.299197 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.46s 2025-05-28 19:29:28.299208 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.82s 2025-05-28 19:29:28.299219 | orchestrator | 
service-cert-copy : keystone | Copying over extra CA certificates ------- 3.55s 2025-05-28 19:29:28.299230 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.41s 2025-05-28 19:29:28.299240 | orchestrator | keystone : Creating default user role ----------------------------------- 3.31s 2025-05-28 19:29:28.299249 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.12s 2025-05-28 19:29:28.299259 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.76s 2025-05-28 19:29:28.299269 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.44s 2025-05-28 19:29:28.299278 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.41s 2025-05-28 19:29:28.299288 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.40s 2025-05-28 19:29:28.299297 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.37s 2025-05-28 19:29:28.299307 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.18s 2025-05-28 19:29:28.299317 | orchestrator | 2025-05-28 19:29:28 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:29:28.299327 | orchestrator | 2025-05-28 19:29:28 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:29:28.299336 | orchestrator | 2025-05-28 19:29:28 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:28.299346 | orchestrator | 2025-05-28 19:29:28 | INFO  | Task 86e2c5b4-0783-4075-838a-11c278e8316c is in state SUCCESS 2025-05-28 19:29:28.299446 | orchestrator | 2025-05-28 19:29:28 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:28.299821 | orchestrator | 2025-05-28 19:29:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in 
state STARTED 2025-05-28 19:29:28.300581 | orchestrator | 2025-05-28 19:29:28 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:29:28.300748 | orchestrator | 2025-05-28 19:29:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:31.339868 | orchestrator | 2025-05-28 19:29:31 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:29:31.339960 | orchestrator | 2025-05-28 19:29:31 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:29:31.339975 | orchestrator | 2025-05-28 19:29:31 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:31.339986 | orchestrator | 2025-05-28 19:29:31 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:31.339997 | orchestrator | 2025-05-28 19:29:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:31.340024 | orchestrator | 2025-05-28 19:29:31 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:29:31.340036 | orchestrator | 2025-05-28 19:29:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:34.401549 | orchestrator | 2025-05-28 19:29:34 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:29:34.401647 | orchestrator | 2025-05-28 19:29:34 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:29:34.401804 | orchestrator | 2025-05-28 19:29:34 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:34.404094 | orchestrator | 2025-05-28 19:29:34 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:34.404805 | orchestrator | 2025-05-28 19:29:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:34.407546 | orchestrator | 2025-05-28 19:29:34 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state 
STARTED 2025-05-28 19:29:34.407635 | orchestrator | 2025-05-28 19:29:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:37.466989 | orchestrator | 2025-05-28 19:29:37 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:29:37.468380 | orchestrator | 2025-05-28 19:29:37 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:29:37.470301 | orchestrator | 2025-05-28 19:29:37 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:37.471283 | orchestrator | 2025-05-28 19:29:37 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:37.473623 | orchestrator | 2025-05-28 19:29:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:37.474462 | orchestrator | 2025-05-28 19:29:37 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:29:37.475030 | orchestrator | 2025-05-28 19:29:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:40.529852 | orchestrator | 2025-05-28 19:29:40 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:29:40.531720 | orchestrator | 2025-05-28 19:29:40 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:29:40.533792 | orchestrator | 2025-05-28 19:29:40 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:40.534672 | orchestrator | 2025-05-28 19:29:40 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:40.536068 | orchestrator | 2025-05-28 19:29:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:40.537666 | orchestrator | 2025-05-28 19:29:40 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:29:40.537741 | orchestrator | 2025-05-28 19:29:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 
19:29:43.578237 | orchestrator | 2025-05-28 19:29:43 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:29:43.581458 | orchestrator | 2025-05-28 19:29:43 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:29:43.583603 | orchestrator | 2025-05-28 19:29:43 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:43.585780 | orchestrator | 2025-05-28 19:29:43 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:43.587655 | orchestrator | 2025-05-28 19:29:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:43.589812 | orchestrator | 2025-05-28 19:29:43 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:29:43.590005 | orchestrator | 2025-05-28 19:29:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:46.637579 | orchestrator | 2025-05-28 19:29:46 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:29:46.638914 | orchestrator | 2025-05-28 19:29:46 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:29:46.640482 | orchestrator | 2025-05-28 19:29:46 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:46.645797 | orchestrator | 2025-05-28 19:29:46 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:46.649207 | orchestrator | 2025-05-28 19:29:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:46.649301 | orchestrator | 2025-05-28 19:29:46 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:29:46.649318 | orchestrator | 2025-05-28 19:29:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:49.694788 | orchestrator | 2025-05-28 19:29:49 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 
19:29:49.696461 | orchestrator | 2025-05-28 19:29:49 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:29:49.697875 | orchestrator | 2025-05-28 19:29:49 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:49.700026 | orchestrator | 2025-05-28 19:29:49 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:49.700169 | orchestrator | 2025-05-28 19:29:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:49.701513 | orchestrator | 2025-05-28 19:29:49 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:29:49.701536 | orchestrator | 2025-05-28 19:29:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:52.750324 | orchestrator | 2025-05-28 19:29:52 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:29:52.750818 | orchestrator | 2025-05-28 19:29:52 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:29:52.754580 | orchestrator | 2025-05-28 19:29:52 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:52.756297 | orchestrator | 2025-05-28 19:29:52 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:52.757824 | orchestrator | 2025-05-28 19:29:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:52.759496 | orchestrator | 2025-05-28 19:29:52 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:29:52.759537 | orchestrator | 2025-05-28 19:29:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:55.802092 | orchestrator | 2025-05-28 19:29:55 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:29:55.802720 | orchestrator | 2025-05-28 19:29:55 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 
19:29:55.803464 | orchestrator | 2025-05-28 19:29:55 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:55.804392 | orchestrator | 2025-05-28 19:29:55 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:55.805236 | orchestrator | 2025-05-28 19:29:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:55.806099 | orchestrator | 2025-05-28 19:29:55 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:29:55.806124 | orchestrator | 2025-05-28 19:29:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:29:58.876619 | orchestrator | 2025-05-28 19:29:58 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:29:58.877631 | orchestrator | 2025-05-28 19:29:58 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:29:58.878979 | orchestrator | 2025-05-28 19:29:58 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:29:58.880536 | orchestrator | 2025-05-28 19:29:58 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:29:58.882169 | orchestrator | 2025-05-28 19:29:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:29:58.883721 | orchestrator | 2025-05-28 19:29:58 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:29:58.883750 | orchestrator | 2025-05-28 19:29:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:30:01.932346 | orchestrator | 2025-05-28 19:30:01 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:30:01.933126 | orchestrator | 2025-05-28 19:30:01 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:30:01.934005 | orchestrator | 2025-05-28 19:30:01 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 
19:30:01.935030 | orchestrator | 2025-05-28 19:30:01 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:30:01.936270 | orchestrator | 2025-05-28 19:30:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:30:01.937051 | orchestrator | 2025-05-28 19:30:01 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:30:01.937079 | orchestrator | 2025-05-28 19:30:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:30:05.001584 | orchestrator | 2025-05-28 19:30:05 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:30:05.003745 | orchestrator | 2025-05-28 19:30:05 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:30:05.007104 | orchestrator | 2025-05-28 19:30:05 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:30:05.009411 | orchestrator | 2025-05-28 19:30:05 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:30:05.011152 | orchestrator | 2025-05-28 19:30:05 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:30:05.013309 | orchestrator | 2025-05-28 19:30:05 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:30:05.013334 | orchestrator | 2025-05-28 19:30:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:30:08.057451 | orchestrator | 2025-05-28 19:30:08 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:30:08.057832 | orchestrator | 2025-05-28 19:30:08 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:30:08.059787 | orchestrator | 2025-05-28 19:30:08 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:30:08.060262 | orchestrator | 2025-05-28 19:30:08 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 
19:30:08.061536 | orchestrator | 2025-05-28 19:30:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:30:08.063365 | orchestrator | 2025-05-28 19:30:08 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:30:08.063390 | orchestrator | 2025-05-28 19:30:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:30:11.110220 | orchestrator | 2025-05-28 19:30:11 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:30:11.110346 | orchestrator | 2025-05-28 19:30:11 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:30:11.110798 | orchestrator | 2025-05-28 19:30:11 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:30:11.111443 | orchestrator | 2025-05-28 19:30:11 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:30:11.112330 | orchestrator | 2025-05-28 19:30:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:30:11.113317 | orchestrator | 2025-05-28 19:30:11 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:30:11.113344 | orchestrator | 2025-05-28 19:30:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:30:14.170721 | orchestrator | 2025-05-28 19:30:14 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:30:14.173149 | orchestrator | 2025-05-28 19:30:14 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:30:14.173183 | orchestrator | 2025-05-28 19:30:14 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:30:14.174935 | orchestrator | 2025-05-28 19:30:14 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:30:14.177019 | orchestrator | 2025-05-28 19:30:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 
19:30:14.177900 | orchestrator | 2025-05-28 19:30:14 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:30:14.177925 | orchestrator | 2025-05-28 19:30:14 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:30:17.235956 | orchestrator | 2025-05-28 19:30:17 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:30:17.236051 | orchestrator | 2025-05-28 19:30:17 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:30:17.236065 | orchestrator | 2025-05-28 19:30:17 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:30:17.236360 | orchestrator | 2025-05-28 19:30:17 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:30:17.237027 | orchestrator | 2025-05-28 19:30:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:30:17.239559 | orchestrator | 2025-05-28 19:30:17 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 19:30:17.239607 | orchestrator | 2025-05-28 19:30:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:30:20.280951 | orchestrator | 2025-05-28 19:30:20 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED 2025-05-28 19:30:20.282158 | orchestrator | 2025-05-28 19:30:20 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:30:20.285500 | orchestrator | 2025-05-28 19:30:20 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:30:20.287335 | orchestrator | 2025-05-28 19:30:20 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:30:20.287772 | orchestrator | 2025-05-28 19:30:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:30:20.288205 | orchestrator | 2025-05-28 19:30:20 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED 2025-05-28 
2025-05-28 19:30:20.288228 | orchestrator | 2025-05-28 19:30:20 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:30:23.327022 | orchestrator | 2025-05-28 19:30:23 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state STARTED
2025-05-28 19:30:23.327128 | orchestrator | 2025-05-28 19:30:23 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED
2025-05-28 19:30:23.327142 | orchestrator | 2025-05-28 19:30:23 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED
2025-05-28 19:30:23.327473 | orchestrator | 2025-05-28 19:30:23 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:30:23.330885 | orchestrator | 2025-05-28 19:30:23 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:30:23.336044 | orchestrator | 2025-05-28 19:30:23 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED
2025-05-28 19:30:23.336082 | orchestrator | 2025-05-28 19:30:23 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:30:26.375593 | orchestrator |
2025-05-28 19:30:26.375767 | orchestrator |
2025-05-28 19:30:26.375785 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-28 19:30:26.375799 | orchestrator |
2025-05-28 19:30:26.375811 | orchestrator | TASK [Check ceph keys] *********************************************************
2025-05-28 19:30:26.375824 | orchestrator | Wednesday 28 May 2025 19:28:47 +0000 (0:00:00.171) 0:00:00.172 *********
2025-05-28 19:30:26.375836 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-28 19:30:26.375848 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-28 19:30:26.375860 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-28 19:30:26.375872 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-28 19:30:26.375883 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-28 19:30:26.375895 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-28 19:30:26.375906 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-28 19:30:26.375918 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-28 19:30:26.375929 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-28 19:30:26.375941 | orchestrator |
2025-05-28 19:30:26.376003 | orchestrator | TASK [Set _fetch_ceph_keys fact] ***********************************************
2025-05-28 19:30:26.376016 | orchestrator | Wednesday 28 May 2025 19:28:50 +0000 (0:00:03.047) 0:00:03.220 *********
2025-05-28 19:30:26.376028 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-28 19:30:26.376039 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-28 19:30:26.376050 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-28 19:30:26.376062 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-28 19:30:26.376073 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-28 19:30:26.376085 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-28 19:30:26.376096 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-28 19:30:26.376108 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-28 19:30:26.376119 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-28 19:30:26.376129 | orchestrator |
2025-05-28 19:30:26.376140 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] ***
2025-05-28 19:30:26.376151 | orchestrator | Wednesday 28 May 2025 19:28:50 +0000 (0:00:00.232) 0:00:03.452 *********
2025-05-28 19:30:26.376162 | orchestrator | ok: [testbed-manager] => {
2025-05-28 19:30:26.376176 | orchestrator |     "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete."
2025-05-28 19:30:26.376204 | orchestrator | }
2025-05-28 19:30:26.376217 | orchestrator |
2025-05-28 19:30:26.376229 | orchestrator | TASK [Fetch ceph keys from the first monitor node] *****************************
2025-05-28 19:30:26.376240 | orchestrator | Wednesday 28 May 2025 19:28:50 +0000 (0:00:00.169) 0:00:03.622 *********
2025-05-28 19:30:26.376251 | orchestrator | changed: [testbed-manager]
2025-05-28 19:30:26.376262 | orchestrator |
2025-05-28 19:30:26.376273 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] ***********
2025-05-28 19:30:26.376284 | orchestrator | Wednesday 28 May 2025 19:29:23 +0000 (0:00:32.719) 0:00:36.342 *********
2025-05-28 19:30:26.376296 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'})
2025-05-28 19:30:26.376307 | orchestrator |
2025-05-28 19:30:26.376319 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ********************
2025-05-28 19:30:26.376329 | orchestrator | Wednesday 28 May 2025 19:29:23 +0000 (0:00:00.469) 0:00:36.812 *********
2025-05-28 19:30:26.376341 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'})
2025-05-28 19:30:26.376354 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'})
2025-05-28 19:30:26.376365 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'})
2025-05-28 19:30:26.376377 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'})
2025-05-28 19:30:26.376388 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'})
2025-05-28 19:30:26.376471 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'})
2025-05-28 19:30:26.376496 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'})
2025-05-28 19:30:26.376508 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'})
2025-05-28 19:30:26.376520 | orchestrator |
2025-05-28 19:30:26.376532 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] *******************
2025-05-28 19:30:26.376544 | orchestrator | Wednesday 28 May 2025 19:29:26 +0000 (0:00:02.832) 0:00:39.645 *********
2025-05-28 19:30:26.376557 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:30:26.376568 | orchestrator |
2025-05-28 19:30:26.376580 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:30:26.376592 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-28 19:30:26.376604 | orchestrator |
2025-05-28 19:30:26.376616 | orchestrator | Wednesday 28 May 2025 19:29:26 +0000 (0:00:00.023) 0:00:39.668 *********
2025-05-28 19:30:26.376628 | orchestrator | ===============================================================================
2025-05-28 19:30:26.376666 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 32.72s
2025-05-28 19:30:26.376678 | orchestrator | Check ceph keys --------------------------------------------------------- 3.05s
2025-05-28 19:30:26.376689 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.83s
2025-05-28 19:30:26.376700 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.47s
2025-05-28 19:30:26.376711 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.23s
2025-05-28 19:30:26.376722 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.17s
2025-05-28 19:30:26.376732 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.02s
2025-05-28 19:30:26.376743 | orchestrator |
2025-05-28 19:30:26.376754 | orchestrator | 2025-05-28 19:30:26 | INFO  | Task f8d48a3d-de4d-4405-bad5-2dae7a50e07e is in state SUCCESS
2025-05-28 19:30:26.376847 | orchestrator | 2025-05-28 19:30:26 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED
2025-05-28 19:30:26.379114 | orchestrator | 2025-05-28 19:30:26 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED
2025-05-28 19:30:26.379852 | orchestrator | 2025-05-28 19:30:26 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:30:26.381011 | orchestrator | 2025-05-28 19:30:26 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
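The "Copy ceph kolla keys" task above fans each fetched keyring out to one or more destinations in the configuration repository, so ceph.client.cinder.keyring ends up under cinder-volume, cinder-backup, and nova. A minimal sketch of that src/dest fan-out (paths shortened, helper names hypothetical; the real job does this via an Ansible loop):

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical mapping mirroring a subset of the loop items in the log.
KOLLA_KEY_MAP = [
    ("ceph.client.cinder.keyring",
     "environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring"),
    ("ceph.client.cinder.keyring",
     "environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring"),
    ("ceph.client.cinder-backup.keyring",
     "environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring"),
    ("ceph.client.nova.keyring",
     "environments/kolla/files/overlays/nova/ceph.client.nova.keyring"),
]

def copy_keys(src_dir: Path, repo_dir: Path, key_map) -> list:
    """Copy each fetched keyring to its place in the configuration repo.

    One source file may appear several times with different destinations,
    exactly like the repeated cinder keyring items in the task output."""
    written = []
    for src, dest in key_map:
        target = repo_dir / dest
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_dir / src, target)
        written.append(target)
    return written

# usage: fake fetched keyrings in a temp dir, then distribute them
tmp = Path(tempfile.mkdtemp())
src = tmp / "fetched"
src.mkdir()
for name in {s for s, _ in KOLLA_KEY_MAP}:
    (src / name).write_text("key = secret\n")
written = copy_keys(src, tmp / "configuration", KOLLA_KEY_MAP)
```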
2025-05-28 19:30:26.381729 | orchestrator | 2025-05-28 19:30:26 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state STARTED
2025-05-28 19:30:26.384007 | orchestrator | 2025-05-28 19:30:26 | INFO  | Task 052dd337-54a6-4c32-b79a-e5ab5d3335fd is in state STARTED
2025-05-28 19:30:26.384034 | orchestrator | 2025-05-28 19:30:26 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:30:59.803687 | orchestrator | 2025-05-28 19:30:59 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED
2025-05-28 19:30:59.806009 | orchestrator | 2025-05-28 19:30:59 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED
2025-05-28 19:30:59.807660 | orchestrator | 2025-05-28 19:30:59 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:30:59.808856 | orchestrator | 2025-05-28 19:30:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:30:59.810164 | orchestrator | 2025-05-28 19:30:59 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED
2025-05-28 19:30:59.811240 | orchestrator | 2025-05-28 19:30:59 | INFO  | Task 1267135e-d076-4c13-a135-19162d022207 is in state SUCCESS
2025-05-28 19:30:59.812276 | orchestrator | 2025-05-28 19:30:59 | INFO  | Task 052dd337-54a6-4c32-b79a-e5ab5d3335fd is in state SUCCESS
2025-05-28 19:30:59.812843 | orchestrator |
2025-05-28 19:30:59.812873 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-05-28 19:30:59.812886 | orchestrator |
2025-05-28 19:30:59.812899 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-05-28 19:30:59.812911 | orchestrator | Wednesday 28 May 2025 19:29:29 +0000 (0:00:00.161) 0:00:00.161 *********
2025-05-28 19:30:59.812923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-05-28 19:30:59.812936 | orchestrator |
2025-05-28 19:30:59.812947 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-05-28 19:30:59.812959 | orchestrator | Wednesday 28 May 2025 19:29:30 +0000 (0:00:00.198) 0:00:00.360 *********
2025-05-28 19:30:59.812971 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-05-28 19:30:59.812982 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-05-28 19:30:59.812994 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-05-28 19:30:59.813006 | orchestrator |
2025-05-28 19:30:59.813018 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-05-28 19:30:59.813029 | orchestrator | Wednesday 28 May 2025 19:29:31 +0000 (0:00:01.171) 0:00:01.532 *********
2025-05-28 19:30:59.813041 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-05-28 19:30:59.813053 | orchestrator |
2025-05-28 19:30:59.813064 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-05-28 19:30:59.813075 | orchestrator | Wednesday 28 May 2025 19:29:32 +0000 (0:00:00.969) 0:00:02.501 *********
2025-05-28 19:30:59.813116 | orchestrator | changed: [testbed-manager]
2025-05-28 19:30:59.813129 | orchestrator |
2025-05-28 19:30:59.813140 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-05-28 19:30:59.813151 | orchestrator | Wednesday 28 May 2025 19:29:33 +0000 (0:00:00.796) 0:00:03.297 *********
2025-05-28 19:30:59.813162 | orchestrator | changed: [testbed-manager]
2025-05-28 19:30:59.813173 | orchestrator |
2025-05-28 19:30:59.813184 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-05-28 19:30:59.813196 | orchestrator | Wednesday 28 May 2025 19:29:34 +0000 (0:00:01.014) 0:00:04.312 *********
2025-05-28 19:30:59.813207 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-05-28 19:30:59.813218 | orchestrator | ok: [testbed-manager]
2025-05-28 19:30:59.813229 | orchestrator |
2025-05-28 19:30:59.813240 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-05-28 19:30:59.813251 | orchestrator | Wednesday 28 May 2025 19:30:14 +0000 (0:00:40.528) 0:00:44.841 *********
2025-05-28 19:30:59.813263 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-05-28 19:30:59.813274 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-05-28 19:30:59.813286 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-05-28 19:30:59.813297 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-05-28 19:30:59.813309 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-05-28 19:30:59.813320 | orchestrator |
2025-05-28 19:30:59.813331 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-05-28 19:30:59.813342 | orchestrator | Wednesday 28 May 2025 19:30:18 +0000 (0:00:03.863) 0:00:48.704 *********
2025-05-28 19:30:59.813353 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-05-28 19:30:59.813364 | orchestrator |
2025-05-28 19:30:59.813375 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-05-28 19:30:59.813386 | orchestrator | Wednesday 28 May 2025 19:30:19 +0000 (0:00:00.493) 0:00:49.197 *********
2025-05-28 19:30:59.813397 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:30:59.813408 | orchestrator |
2025-05-28 19:30:59.813419 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-05-28 19:30:59.813430 | orchestrator | Wednesday 28 May 2025 19:30:19 +0000 (0:00:00.115) 0:00:49.313 *********
2025-05-28 19:30:59.813441 | orchestrator | skipping: [testbed-manager]
2025-05-28 19:30:59.813452 | orchestrator |
2025-05-28 19:30:59.813464 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-05-28 19:30:59.813478 | orchestrator | Wednesday 28 May 2025 19:30:19 +0000 (0:00:00.297) 0:00:49.611 *********
2025-05-28 19:30:59.813490 | orchestrator | changed: [testbed-manager]
2025-05-28 19:30:59.813503 | orchestrator |
2025-05-28 19:30:59.813530 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-05-28 19:30:59.813543 | orchestrator | Wednesday 28 May 2025 19:30:20 +0000 (0:00:01.459) 0:00:51.070 *********
2025-05-28 19:30:59.813556 | orchestrator | changed: [testbed-manager]
2025-05-28 19:30:59.813575 | orchestrator |
2025-05-28 19:30:59.813618 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-05-28 19:30:59.813641 | orchestrator | Wednesday 28 May 2025 19:30:21 +0000 (0:00:00.873) 0:00:51.944 *********
2025-05-28 19:30:59.813661 | orchestrator | changed: [testbed-manager]
2025-05-28 19:30:59.813679 | orchestrator |
2025-05-28 19:30:59.813691 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-05-28 19:30:59.813704 | orchestrator | Wednesday 28 May 2025 19:30:22 +0000 (0:00:00.486) 0:00:52.430 *********
2025-05-28 19:30:59.813717 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-05-28 19:30:59.813730 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-05-28 19:30:59.813742 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-05-28 19:30:59.813754 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-05-28 19:30:59.813767 | orchestrator |
2025-05-28 19:30:59.813780 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:30:59.813802 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-28 19:30:59.813817 | orchestrator |
2025-05-28 19:30:59.813843 | orchestrator | Wednesday 28 May 2025 19:30:23 +0000 (0:00:01.259) 0:00:53.690 *********
2025-05-28 19:30:59.813856 | orchestrator | ===============================================================================
2025-05-28 19:30:59.813867 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.53s
2025-05-28 19:30:59.813878 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.86s
2025-05-28 19:30:59.813889 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.46s
2025-05-28 19:30:59.813900 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.26s
2025-05-28 19:30:59.813912 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.17s
2025-05-28 19:30:59.813922 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.01s
2025-05-28 19:30:59.813933 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 0.97s
2025-05-28 19:30:59.813944 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.87s
2025-05-28 19:30:59.813955 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.80s
2025-05-28 19:30:59.813966 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s
2025-05-28 19:30:59.813977 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.49s
2025-05-28 19:30:59.813988 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s
2025-05-28 19:30:59.813999 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s
2025-05-28 19:30:59.814010 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2025-05-28 19:30:59.814076 | orchestrator |
2025-05-28 19:30:59.814089 | orchestrator |
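The "Manage cephclient service" task above shows Ansible's retries/until pattern in action: the first health probe fails ("10 retries left") while the container image is still being pulled, and the task succeeds about 40 seconds later. The same retry idea can be sketched like this (the `check` callable is a hypothetical stand-in for whatever probes the service):

```python
import time

def wait_until_healthy(check, retries=10, delay=0.0):
    """Retry a health check up to `retries` times, mirroring the
    'FAILED - RETRYING: ... (N retries left).' output in the log.

    Returns the attempt number on which the check first succeeded."""
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            time.sleep(delay)
    raise RuntimeError(f"service not healthy after {retries} attempts")

# usage: the fake service only comes up on the second probe
probes = iter([False, True])
attempt = wait_until_healthy(lambda: next(probes), retries=10)
```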
2025-05-28 19:30:59.814100 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-05-28 19:30:59.814111 | orchestrator |
2025-05-28 19:30:59.814123 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-05-28 19:30:59.814134 | orchestrator | Wednesday 28 May 2025 19:29:30 +0000 (0:00:00.137) 0:00:00.137 *********
2025-05-28 19:30:59.814145 | orchestrator | changed: [localhost]
2025-05-28 19:30:59.814156 | orchestrator |
2025-05-28 19:30:59.814168 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-05-28 19:30:59.814179 | orchestrator | Wednesday 28 May 2025 19:29:30 +0000 (0:00:00.751) 0:00:00.889 *********
2025-05-28 19:30:59.814190 | orchestrator | changed: [localhost]
2025-05-28 19:30:59.814201 | orchestrator |
2025-05-28 19:30:59.814212 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-28 19:30:59.814223 | orchestrator | Wednesday 28 May 2025 19:30:50 +0000 (0:01:19.630) 0:01:20.519 *********
2025-05-28 19:30:59.814234 | orchestrator | changed: [localhost]
2025-05-28 19:30:59.814245 | orchestrator |
2025-05-28 19:30:59.814256 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 19:30:59.814267 | orchestrator |
2025-05-28 19:30:59.814279 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 19:30:59.814289 | orchestrator | Wednesday 28 May 2025 19:30:54 +0000 (0:00:04.372) 0:01:24.892 *********
2025-05-28 19:30:59.814301 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:30:59.814312 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:30:59.814323 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:30:59.814334 | orchestrator |
2025-05-28 19:30:59.814345 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 19:30:59.814357 | orchestrator | Wednesday 28 May 2025 19:30:55 +0000 (0:00:00.970) 0:01:25.862 *********
2025-05-28 19:30:59.814368 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-28 19:30:59.814379 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-28 19:30:59.814397 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-28 19:30:59.814409 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-28 19:30:59.814420 | orchestrator |
2025-05-28 19:30:59.814431 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-28 19:30:59.814442 | orchestrator | skipping: no hosts matched
2025-05-28 19:30:59.814453 | orchestrator |
2025-05-28 19:30:59.814464 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:30:59.814475 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:30:59.814494 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:30:59.814505 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:30:59.814517 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:30:59.814528 | orchestrator |
2025-05-28 19:30:59.814539 | orchestrator |
2025-05-28 19:30:59.814550 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:30:59.814561 | orchestrator | Wednesday 28 May 2025 19:30:56 +0000 (0:00:00.898) 0:01:26.760 *********
2025-05-28 19:30:59.814572 | orchestrator | ===============================================================================
2025-05-28 19:30:59.814583 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 79.63s
2025-05-28 19:30:59.814616 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.37s
2025-05-28 19:30:59.814637 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.97s
2025-05-28 19:30:59.814658 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s
2025-05-28 19:30:59.814677 | orchestrator | Ensure the destination directory exists --------------------------------- 0.75s
2025-05-28 19:30:59.814697 | orchestrator |
2025-05-28 19:30:59.814716 | orchestrator | 2025-05-28 19:30:59 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:31:02.865508 | orchestrator | 2025-05-28 19:31:02 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED
2025-05-28 19:31:02.865671 | orchestrator | 2025-05-28 19:31:02 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED
2025-05-28 19:31:02.866520 | orchestrator | 2025-05-28 19:31:02 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:31:02.867272 | orchestrator | 2025-05-28 19:31:02 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:31:02.870074 | orchestrator | 2025-05-28 19:31:02 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED
2025-05-28 19:31:02.870156 | orchestrator | 2025-05-28 19:31:02 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:31:05.905975 | orchestrator | 2025-05-28 19:31:05 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED
2025-05-28 19:31:05.906113 | orchestrator | 2025-05-28 19:31:05 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED
2025-05-28 19:31:05.906985 | orchestrator | 2025-05-28 19:31:05 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:31:05.908284 | orchestrator | 2025-05-28 19:31:05 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:05.910357 | orchestrator | 2025-05-28 19:31:05 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:05.910949 | orchestrator | 2025-05-28 19:31:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:08.952473 | orchestrator | 2025-05-28 19:31:08 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:08.952666 | orchestrator | 2025-05-28 19:31:08 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:31:08.952686 | orchestrator | 2025-05-28 19:31:08 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:08.953084 | orchestrator | 2025-05-28 19:31:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:08.953741 | orchestrator | 2025-05-28 19:31:08 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:08.953877 | orchestrator | 2025-05-28 19:31:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:11.988892 | orchestrator | 2025-05-28 19:31:11 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:11.989577 | orchestrator | 2025-05-28 19:31:11 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:31:11.991197 | orchestrator | 2025-05-28 19:31:11 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:11.993000 | orchestrator | 2025-05-28 19:31:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:11.993786 | orchestrator | 2025-05-28 19:31:11 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:11.994004 | orchestrator | 2025-05-28 19:31:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:15.030005 | orchestrator | 2025-05-28 19:31:15 | INFO  | Task 
dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:15.030197 | orchestrator | 2025-05-28 19:31:15 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:31:15.030215 | orchestrator | 2025-05-28 19:31:15 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:15.033688 | orchestrator | 2025-05-28 19:31:15 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:15.034185 | orchestrator | 2025-05-28 19:31:15 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:15.034467 | orchestrator | 2025-05-28 19:31:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:18.069089 | orchestrator | 2025-05-28 19:31:18 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:18.069191 | orchestrator | 2025-05-28 19:31:18 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:31:18.069736 | orchestrator | 2025-05-28 19:31:18 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:18.071368 | orchestrator | 2025-05-28 19:31:18 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:18.071923 | orchestrator | 2025-05-28 19:31:18 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:18.071940 | orchestrator | 2025-05-28 19:31:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:21.113789 | orchestrator | 2025-05-28 19:31:21 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:21.113892 | orchestrator | 2025-05-28 19:31:21 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:31:21.113908 | orchestrator | 2025-05-28 19:31:21 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:21.114455 | orchestrator | 2025-05-28 19:31:21 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:21.115261 | orchestrator | 2025-05-28 19:31:21 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:21.115315 | orchestrator | 2025-05-28 19:31:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:24.159045 | orchestrator | 2025-05-28 19:31:24 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:24.159155 | orchestrator | 2025-05-28 19:31:24 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:31:24.159699 | orchestrator | 2025-05-28 19:31:24 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:24.163062 | orchestrator | 2025-05-28 19:31:24 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:24.163101 | orchestrator | 2025-05-28 19:31:24 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:24.163115 | orchestrator | 2025-05-28 19:31:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:27.198262 | orchestrator | 2025-05-28 19:31:27 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:27.198368 | orchestrator | 2025-05-28 19:31:27 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:31:27.199794 | orchestrator | 2025-05-28 19:31:27 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:27.200849 | orchestrator | 2025-05-28 19:31:27 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:27.200872 | orchestrator | 2025-05-28 19:31:27 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:27.200885 | orchestrator | 2025-05-28 19:31:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:30.233017 | orchestrator | 2025-05-28 19:31:30 | INFO  | Task 
dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:30.235130 | orchestrator | 2025-05-28 19:31:30 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state STARTED 2025-05-28 19:31:30.235187 | orchestrator | 2025-05-28 19:31:30 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:30.235202 | orchestrator | 2025-05-28 19:31:30 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:30.235977 | orchestrator | 2025-05-28 19:31:30 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:30.236002 | orchestrator | 2025-05-28 19:31:30 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:33.301313 | orchestrator | 2025-05-28 19:31:33 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:33.301539 | orchestrator | 2025-05-28 19:31:33 | INFO  | Task c7e4c229-6282-42dc-897e-ba567e747b83 is in state SUCCESS 2025-05-28 19:31:33.303037 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-28 19:31:33.303121 | orchestrator | 2025-05-28 19:31:33.303197 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2025-05-28 19:31:33.303212 | orchestrator | 2025-05-28 19:31:33.303224 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-28 19:31:33.303236 | orchestrator | Wednesday 28 May 2025 19:30:26 +0000 (0:00:00.432) 0:00:00.432 ********* 2025-05-28 19:31:33.303247 | orchestrator | changed: [testbed-manager] 2025-05-28 19:31:33.303260 | orchestrator | 2025-05-28 19:31:33.303271 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-28 19:31:33.303282 | orchestrator | Wednesday 28 May 2025 19:30:28 +0000 (0:00:01.910) 0:00:02.343 ********* 2025-05-28 19:31:33.303294 | orchestrator | changed: [testbed-manager] 2025-05-28
19:31:33.303305 | orchestrator | 2025-05-28 19:31:33.303348 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-28 19:31:33.303387 | orchestrator | Wednesday 28 May 2025 19:30:29 +0000 (0:00:00.932) 0:00:03.275 ********* 2025-05-28 19:31:33.303399 | orchestrator | changed: [testbed-manager] 2025-05-28 19:31:33.303410 | orchestrator | 2025-05-28 19:31:33.303421 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-28 19:31:33.303432 | orchestrator | Wednesday 28 May 2025 19:30:30 +0000 (0:00:01.072) 0:00:04.348 ********* 2025-05-28 19:31:33.303443 | orchestrator | changed: [testbed-manager] 2025-05-28 19:31:33.303454 | orchestrator | 2025-05-28 19:31:33.303466 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-28 19:31:33.303477 | orchestrator | Wednesday 28 May 2025 19:30:31 +0000 (0:00:01.057) 0:00:05.405 ********* 2025-05-28 19:31:33.303488 | orchestrator | changed: [testbed-manager] 2025-05-28 19:31:33.303499 | orchestrator | 2025-05-28 19:31:33.303510 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-28 19:31:33.303521 | orchestrator | Wednesday 28 May 2025 19:30:32 +0000 (0:00:01.038) 0:00:06.444 ********* 2025-05-28 19:31:33.303532 | orchestrator | changed: [testbed-manager] 2025-05-28 19:31:33.303543 | orchestrator | 2025-05-28 19:31:33.303554 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-28 19:31:33.303565 | orchestrator | Wednesday 28 May 2025 19:30:33 +0000 (0:00:00.899) 0:00:07.343 ********* 2025-05-28 19:31:33.303576 | orchestrator | changed: [testbed-manager] 2025-05-28 19:31:33.303587 | orchestrator | 2025-05-28 19:31:33.303600 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-28 19:31:33.303612 | orchestrator | Wednesday 28 
May 2025 19:30:34 +0000 (0:00:01.276) 0:00:08.619 ********* 2025-05-28 19:31:33.303624 | orchestrator | changed: [testbed-manager] 2025-05-28 19:31:33.303664 | orchestrator | 2025-05-28 19:31:33.303685 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-28 19:31:33.303706 | orchestrator | Wednesday 28 May 2025 19:30:35 +0000 (0:00:00.934) 0:00:09.554 ********* 2025-05-28 19:31:33.303726 | orchestrator | changed: [testbed-manager] 2025-05-28 19:31:33.303744 | orchestrator | 2025-05-28 19:31:33.303757 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-28 19:31:33.303768 | orchestrator | Wednesday 28 May 2025 19:30:52 +0000 (0:00:17.050) 0:00:26.604 ********* 2025-05-28 19:31:33.303780 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:31:33.303792 | orchestrator | 2025-05-28 19:31:33.303804 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-28 19:31:33.303817 | orchestrator | 2025-05-28 19:31:33.303829 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-28 19:31:33.303862 | orchestrator | Wednesday 28 May 2025 19:30:53 +0000 (0:00:00.565) 0:00:27.169 ********* 2025-05-28 19:31:33.303875 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:31:33.303887 | orchestrator | 2025-05-28 19:31:33.303899 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-28 19:31:33.303911 | orchestrator | 2025-05-28 19:31:33.303923 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-28 19:31:33.303936 | orchestrator | Wednesday 28 May 2025 19:30:55 +0000 (0:00:01.935) 0:00:29.105 ********* 2025-05-28 19:31:33.303948 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:31:33.303959 | orchestrator | 2025-05-28 19:31:33.303970 | orchestrator | PLAY [Restart 
ceph manager services] ******************************************* 2025-05-28 19:31:33.303981 | orchestrator | 2025-05-28 19:31:33.303991 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-28 19:31:33.304002 | orchestrator | Wednesday 28 May 2025 19:30:56 +0000 (0:00:01.591) 0:00:30.697 ********* 2025-05-28 19:31:33.304013 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:31:33.304024 | orchestrator | 2025-05-28 19:31:33.304035 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:31:33.304047 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-28 19:31:33.304069 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:31:33.304081 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:31:33.304092 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:31:33.304103 | orchestrator | 2025-05-28 19:31:33.304114 | orchestrator | 2025-05-28 19:31:33.304124 | orchestrator | 2025-05-28 19:31:33.304135 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:31:33.304146 | orchestrator | Wednesday 28 May 2025 19:30:58 +0000 (0:00:01.379) 0:00:32.076 ********* 2025-05-28 19:31:33.304166 | orchestrator | =============================================================================== 2025-05-28 19:31:33.304177 | orchestrator | Create admin user ------------------------------------------------------ 17.05s 2025-05-28 19:31:33.304202 | orchestrator | Restart ceph manager service -------------------------------------------- 4.91s 2025-05-28 19:31:33.304214 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.91s 2025-05-28 
19:31:33.304225 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.28s 2025-05-28 19:31:33.304236 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.07s 2025-05-28 19:31:33.304247 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.06s 2025-05-28 19:31:33.304258 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.04s 2025-05-28 19:31:33.304269 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.93s 2025-05-28 19:31:33.304279 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.93s 2025-05-28 19:31:33.304294 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.90s 2025-05-28 19:31:33.304313 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.57s 2025-05-28 19:31:33.304333 | orchestrator | 2025-05-28 19:31:33.304352 | orchestrator | 2025-05-28 19:31:33.304372 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:31:33.304392 | orchestrator | 2025-05-28 19:31:33.304414 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:31:33.304433 | orchestrator | Wednesday 28 May 2025 19:29:30 +0000 (0:00:00.509) 0:00:00.509 ********* 2025-05-28 19:31:33.304445 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:31:33.304457 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:31:33.304468 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:31:33.304479 | orchestrator | 2025-05-28 19:31:33.304490 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:31:33.304501 | orchestrator | Wednesday 28 May 2025 19:29:31 +0000 (0:00:00.402) 0:00:00.912 ********* 2025-05-28 19:31:33.304512 | 
orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-28 19:31:33.304523 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-28 19:31:33.304534 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-28 19:31:33.304545 | orchestrator | 2025-05-28 19:31:33.304556 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-28 19:31:33.304567 | orchestrator | 2025-05-28 19:31:33.304578 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-28 19:31:33.304588 | orchestrator | Wednesday 28 May 2025 19:29:31 +0000 (0:00:00.372) 0:00:01.284 ********* 2025-05-28 19:31:33.304599 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:31:33.304611 | orchestrator | 2025-05-28 19:31:33.304622 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-28 19:31:33.304633 | orchestrator | Wednesday 28 May 2025 19:29:32 +0000 (0:00:00.959) 0:00:02.244 ********* 2025-05-28 19:31:33.304683 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-28 19:31:33.304694 | orchestrator | 2025-05-28 19:31:33.304705 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-28 19:31:33.304715 | orchestrator | Wednesday 28 May 2025 19:29:35 +0000 (0:00:03.537) 0:00:05.781 ********* 2025-05-28 19:31:33.304726 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-28 19:31:33.304737 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-28 19:31:33.304749 | orchestrator | 2025-05-28 19:31:33.304760 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-28 
19:31:33.304771 | orchestrator | Wednesday 28 May 2025 19:29:42 +0000 (0:00:06.090) 0:00:11.872 ********* 2025-05-28 19:31:33.304782 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-28 19:31:33.304793 | orchestrator | 2025-05-28 19:31:33.304804 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-28 19:31:33.304814 | orchestrator | Wednesday 28 May 2025 19:29:45 +0000 (0:00:03.349) 0:00:15.221 ********* 2025-05-28 19:31:33.304825 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 19:31:33.304836 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-28 19:31:33.304847 | orchestrator | 2025-05-28 19:31:33.304858 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-28 19:31:33.304869 | orchestrator | Wednesday 28 May 2025 19:29:49 +0000 (0:00:03.786) 0:00:19.008 ********* 2025-05-28 19:31:33.304880 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 19:31:33.304916 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-28 19:31:33.304927 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-28 19:31:33.304939 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-28 19:31:33.304950 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-28 19:31:33.304968 | orchestrator | 2025-05-28 19:31:33.304988 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-28 19:31:33.305005 | orchestrator | Wednesday 28 May 2025 19:30:04 +0000 (0:00:15.281) 0:00:34.289 ********* 2025-05-28 19:31:33.305016 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-28 19:31:33.305027 | orchestrator | 2025-05-28 19:31:33.305038 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-28 
19:31:33.305048 | orchestrator | Wednesday 28 May 2025 19:30:08 +0000 (0:00:04.491) 0:00:38.780 ********* 2025-05-28 19:31:33.305081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 19:31:33.305101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 19:31:33.305123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.305136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.305148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.305174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 19:31:33.305187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.305206 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.305217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.305229 | orchestrator | 2025-05-28 19:31:33.305240 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-28 19:31:33.305251 | orchestrator | Wednesday 28 May 2025 19:30:12 +0000 (0:00:03.387) 0:00:42.168 ********* 2025-05-28 19:31:33.305262 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-28 19:31:33.305273 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-28 19:31:33.305283 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-28 19:31:33.305294 | orchestrator | 2025-05-28 
19:31:33.305305 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-05-28 19:31:33.305316 | orchestrator | Wednesday 28 May 2025 19:30:13 +0000 (0:00:01.578) 0:00:43.747 *********
2025-05-28 19:31:33.305327 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:31:33.305338 | orchestrator |
2025-05-28 19:31:33.305349 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-05-28 19:31:33.305359 | orchestrator | Wednesday 28 May 2025 19:30:14 +0000 (0:00:00.298) 0:00:44.046 *********
2025-05-28 19:31:33.305370 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:31:33.305381 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:31:33.305392 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:31:33.305403 | orchestrator |
2025-05-28 19:31:33.305413 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-28 19:31:33.305424 | orchestrator | Wednesday 28 May 2025 19:30:15 +0000 (0:00:01.390) 0:00:45.436 *********
2025-05-28 19:31:33.305435 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:31:33.305446 | orchestrator |
2025-05-28 19:31:33.305457 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-05-28 19:31:33.305482 | orchestrator | Wednesday 28 May 2025 19:30:17 +0000 (0:00:01.650) 0:00:47.086 *********
2025-05-28 19:31:33.305508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.305538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.305551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.305564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305697 | orchestrator |
2025-05-28 19:31:33.305709 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-05-28 19:31:33.305720 | orchestrator | Wednesday 28 May 2025 19:30:21 +0000 (0:00:04.639) 0:00:51.726 *********
2025-05-28 19:31:33.305732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.305757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305795 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:31:33.305807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.305819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305842 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:31:33.305867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.305886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305909 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:31:33.305920 | orchestrator |
2025-05-28 19:31:33.305931 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-05-28 19:31:33.305943 | orchestrator | Wednesday 28 May 2025 19:30:23 +0000 (0:00:01.263) 0:00:52.989 *********
2025-05-28 19:31:33.305955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.305967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.305989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306396 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:31:33.306419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.306432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306455 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:31:33.306467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.306494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306527 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:31:33.306538 | orchestrator |
2025-05-28 19:31:33.306549 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-05-28 19:31:33.306561 | orchestrator | Wednesday 28 May 2025 19:30:24 +0000 (0:00:01.370) 0:00:54.360 *********
2025-05-28 19:31:33.306572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.306585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.306597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.306627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.306783 | orchestrator |
2025-05-28 19:31:33.306794 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-05-28 19:31:33.306805 | orchestrator | Wednesday 28 May 2025 19:30:29 +0000 (0:00:04.634) 0:00:58.995 *********
2025-05-28 19:31:33.306816 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:31:33.306827 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:31:33.306838 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:31:33.306849 | orchestrator |
2025-05-28 19:31:33.306866 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-05-28 19:31:33.306877 | orchestrator | Wednesday 28 May 2025 19:30:32 +0000 (0:00:03.078) 0:01:02.073 *********
2025-05-28 19:31:33.306894 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-28 19:31:33.306906 | orchestrator |
2025-05-28 19:31:33.306917 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-05-28 19:31:33.306926 | orchestrator | Wednesday 28 May 2025 19:30:34 +0000 (0:00:02.680) 0:01:04.753 *********
2025-05-28 19:31:33.306936 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:31:33.306946 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:31:33.306955 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:31:33.306965 | orchestrator |
2025-05-28 19:31:33.306975 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-05-28 19:31:33.306985 | orchestrator | Wednesday 28 May 2025 19:30:36 +0000 (0:00:01.958) 0:01:06.711 *********
2025-05-28 19:31:33.306995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.307008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.307026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-28 19:31:33.307044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.307062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.307074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.307086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.307103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.307114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:31:33.307125 | orchestrator |
2025-05-28 19:31:33.307136 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-05-28 19:31:33.307148 | orchestrator | Wednesday 28 May 2025 19:30:47 +0000 (0:00:10.304) 0:01:17.016 *********
2025-05-28 19:31:33.307171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'},
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 19:31:33.307182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:31:33.307193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:31:33.307203 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:31:33.307222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 19:31:33.307233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:31:33.307248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:31:33.307259 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:31:33.307275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-28 19:31:33.307287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-28 19:31:33.307303 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:31:33.307313 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:31:33.307323 | orchestrator | 2025-05-28 19:31:33.307333 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-28 19:31:33.307342 | orchestrator | Wednesday 28 May 2025 19:30:48 +0000 (0:00:01.344) 0:01:18.361 ********* 2025-05-28 19:31:33.307353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 19:31:33.307373 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-28 19:31:33.307385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2025-05-28 19:31:33.307402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.307412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.307422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.307436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.307452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.307462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:31:33.307478 | orchestrator | 2025-05-28 19:31:33.307488 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-28 19:31:33.307498 | orchestrator | Wednesday 28 May 2025 19:30:51 +0000 (0:00:03.472) 0:01:21.833 ********* 2025-05-28 19:31:33.307507 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:31:33.307517 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:31:33.307527 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:31:33.307537 | orchestrator | 2025-05-28 19:31:33.307547 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-28 19:31:33.307556 | orchestrator | Wednesday 28 May 2025 19:30:52 +0000 (0:00:00.247) 0:01:22.080 ********* 2025-05-28 19:31:33.307566 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:31:33.307576 | orchestrator | 2025-05-28 19:31:33.307586 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-28 19:31:33.307596 | orchestrator | Wednesday 28 May 2025 19:30:54 +0000 (0:00:02.641) 0:01:24.722 ********* 2025-05-28 19:31:33.307605 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:31:33.307615 | orchestrator | 2025-05-28 19:31:33.307625 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-28 19:31:33.307656 | orchestrator | Wednesday 28 May 2025 19:30:57 +0000 (0:00:02.559) 0:01:27.281 ********* 2025-05-28 19:31:33.307674 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:31:33.307691 | orchestrator | 2025-05-28 19:31:33.307703 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-28 
19:31:33.307715 | orchestrator | Wednesday 28 May 2025 19:31:08 +0000 (0:00:11.386) 0:01:38.668 ********* 2025-05-28 19:31:33.307732 | orchestrator | 2025-05-28 19:31:33.307743 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-28 19:31:33.307753 | orchestrator | Wednesday 28 May 2025 19:31:08 +0000 (0:00:00.126) 0:01:38.795 ********* 2025-05-28 19:31:33.307762 | orchestrator | 2025-05-28 19:31:33.307772 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-28 19:31:33.307781 | orchestrator | Wednesday 28 May 2025 19:31:09 +0000 (0:00:00.397) 0:01:39.192 ********* 2025-05-28 19:31:33.307791 | orchestrator | 2025-05-28 19:31:33.307800 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-28 19:31:33.307809 | orchestrator | Wednesday 28 May 2025 19:31:09 +0000 (0:00:00.144) 0:01:39.337 ********* 2025-05-28 19:31:33.307819 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:31:33.307829 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:31:33.307838 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:31:33.307848 | orchestrator | 2025-05-28 19:31:33.307857 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-28 19:31:33.307867 | orchestrator | Wednesday 28 May 2025 19:31:18 +0000 (0:00:08.584) 0:01:47.922 ********* 2025-05-28 19:31:33.307876 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:31:33.307886 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:31:33.307896 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:31:33.307906 | orchestrator | 2025-05-28 19:31:33.307915 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-28 19:31:33.307925 | orchestrator | Wednesday 28 May 2025 19:31:24 +0000 (0:00:06.164) 0:01:54.087 ********* 2025-05-28 19:31:33.307934 | orchestrator | 
changed: [testbed-node-0] 2025-05-28 19:31:33.307944 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:31:33.307953 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:31:33.307963 | orchestrator | 2025-05-28 19:31:33.307973 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:31:33.307983 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 19:31:33.307994 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-28 19:31:33.308015 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-28 19:31:33.308024 | orchestrator | 2025-05-28 19:31:33.308034 | orchestrator | 2025-05-28 19:31:33.308044 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:31:33.308059 | orchestrator | Wednesday 28 May 2025 19:31:30 +0000 (0:00:06.275) 0:02:00.362 ********* 2025-05-28 19:31:33.308069 | orchestrator | =============================================================================== 2025-05-28 19:31:33.308079 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.28s 2025-05-28 19:31:33.308088 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.39s 2025-05-28 19:31:33.308098 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.30s 2025-05-28 19:31:33.308108 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.58s 2025-05-28 19:31:33.308117 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.28s 2025-05-28 19:31:33.308127 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.16s 2025-05-28 19:31:33.308136 | orchestrator | service-ks-register 
: barbican | Creating endpoints --------------------- 6.09s 2025-05-28 19:31:33.308146 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.64s 2025-05-28 19:31:33.308155 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.63s 2025-05-28 19:31:33.308165 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.49s 2025-05-28 19:31:33.308175 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.79s 2025-05-28 19:31:33.308184 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.54s 2025-05-28 19:31:33.308194 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.47s 2025-05-28 19:31:33.308203 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.39s 2025-05-28 19:31:33.308213 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.35s 2025-05-28 19:31:33.308222 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.08s 2025-05-28 19:31:33.308232 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.68s 2025-05-28 19:31:33.308241 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.64s 2025-05-28 19:31:33.308251 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.56s 2025-05-28 19:31:33.308260 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 1.96s 2025-05-28 19:31:33.308270 | orchestrator | 2025-05-28 19:31:33 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:31:33.308280 | orchestrator | 2025-05-28 19:31:33 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:33.308289 | orchestrator | 2025-05-28 
19:31:33 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:33.308299 | orchestrator | 2025-05-28 19:31:33 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:33.308309 | orchestrator | 2025-05-28 19:31:33 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:36.335286 | orchestrator | 2025-05-28 19:31:36 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:36.335395 | orchestrator | 2025-05-28 19:31:36 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:31:36.335411 | orchestrator | 2025-05-28 19:31:36 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:36.335424 | orchestrator | 2025-05-28 19:31:36 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:36.335461 | orchestrator | 2025-05-28 19:31:36 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:36.335474 | orchestrator | 2025-05-28 19:31:36 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:39.369836 | orchestrator | 2025-05-28 19:31:39 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:39.369944 | orchestrator | 2025-05-28 19:31:39 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:31:39.369960 | orchestrator | 2025-05-28 19:31:39 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:39.370692 | orchestrator | 2025-05-28 19:31:39 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:39.371704 | orchestrator | 2025-05-28 19:31:39 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:39.371728 | orchestrator | 2025-05-28 19:31:39 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:42.409324 | orchestrator | 2025-05-28 19:31:42 | INFO  | Task 
dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:42.409406 | orchestrator | 2025-05-28 19:31:42 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:31:42.409932 | orchestrator | 2025-05-28 19:31:42 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:42.410274 | orchestrator | 2025-05-28 19:31:42 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:42.410871 | orchestrator | 2025-05-28 19:31:42 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:42.410896 | orchestrator | 2025-05-28 19:31:42 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:45.447336 | orchestrator | 2025-05-28 19:31:45 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:45.451717 | orchestrator | 2025-05-28 19:31:45 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:31:45.452403 | orchestrator | 2025-05-28 19:31:45 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:45.453156 | orchestrator | 2025-05-28 19:31:45 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:45.455693 | orchestrator | 2025-05-28 19:31:45 | INFO  | Task 281d172c-afd9-4ecb-b01e-5025a771372d is in state STARTED 2025-05-28 19:31:45.456475 | orchestrator | 2025-05-28 19:31:45 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:45.456678 | orchestrator | 2025-05-28 19:31:45 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:48.498989 | orchestrator | 2025-05-28 19:31:48 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:48.499082 | orchestrator | 2025-05-28 19:31:48 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:31:48.499095 | orchestrator | 2025-05-28 19:31:48 | INFO  | Task 
72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:48.499723 | orchestrator | 2025-05-28 19:31:48 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:48.499749 | orchestrator | 2025-05-28 19:31:48 | INFO  | Task 281d172c-afd9-4ecb-b01e-5025a771372d is in state STARTED 2025-05-28 19:31:48.499872 | orchestrator | 2025-05-28 19:31:48 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:48.499889 | orchestrator | 2025-05-28 19:31:48 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:51.527758 | orchestrator | 2025-05-28 19:31:51 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:51.529174 | orchestrator | 2025-05-28 19:31:51 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:31:51.530116 | orchestrator | 2025-05-28 19:31:51 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:51.530994 | orchestrator | 2025-05-28 19:31:51 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:51.532928 | orchestrator | 2025-05-28 19:31:51 | INFO  | Task 281d172c-afd9-4ecb-b01e-5025a771372d is in state STARTED 2025-05-28 19:31:51.533375 | orchestrator | 2025-05-28 19:31:51 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:51.533397 | orchestrator | 2025-05-28 19:31:51 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:54.566972 | orchestrator | 2025-05-28 19:31:54 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:54.567088 | orchestrator | 2025-05-28 19:31:54 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:31:54.567105 | orchestrator | 2025-05-28 19:31:54 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:54.567188 | orchestrator | 2025-05-28 19:31:54 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:54.567316 | orchestrator | 2025-05-28 19:31:54 | INFO  | Task 281d172c-afd9-4ecb-b01e-5025a771372d is in state SUCCESS 2025-05-28 19:31:54.567970 | orchestrator | 2025-05-28 19:31:54 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:54.567998 | orchestrator | 2025-05-28 19:31:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:31:57.602538 | orchestrator | 2025-05-28 19:31:57 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:31:57.603166 | orchestrator | 2025-05-28 19:31:57 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:31:57.603970 | orchestrator | 2025-05-28 19:31:57 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:31:57.604862 | orchestrator | 2025-05-28 19:31:57 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:31:57.605475 | orchestrator | 2025-05-28 19:31:57 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:31:57.606248 | orchestrator | 2025-05-28 19:31:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:00.643533 | orchestrator | 2025-05-28 19:32:00 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:00.645044 | orchestrator | 2025-05-28 19:32:00 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:00.647094 | orchestrator | 2025-05-28 19:32:00 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:00.647129 | orchestrator | 2025-05-28 19:32:00 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:00.647141 | orchestrator | 2025-05-28 19:32:00 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:32:00.647153 | orchestrator | 2025-05-28 19:32:00 | INFO  | Wait 1 
second(s) until the next check 2025-05-28 19:32:03.685047 | orchestrator | 2025-05-28 19:32:03 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:03.685202 | orchestrator | 2025-05-28 19:32:03 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:03.685616 | orchestrator | 2025-05-28 19:32:03 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:03.686158 | orchestrator | 2025-05-28 19:32:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:03.687128 | orchestrator | 2025-05-28 19:32:03 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:32:03.687152 | orchestrator | 2025-05-28 19:32:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:06.711689 | orchestrator | 2025-05-28 19:32:06 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:06.712904 | orchestrator | 2025-05-28 19:32:06 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:06.713679 | orchestrator | 2025-05-28 19:32:06 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:06.717352 | orchestrator | 2025-05-28 19:32:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:06.718288 | orchestrator | 2025-05-28 19:32:06 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:32:06.718357 | orchestrator | 2025-05-28 19:32:06 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:09.758278 | orchestrator | 2025-05-28 19:32:09 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:09.759068 | orchestrator | 2025-05-28 19:32:09 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:09.761218 | orchestrator | 2025-05-28 19:32:09 | INFO  | Task 
72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:09.761980 | orchestrator | 2025-05-28 19:32:09 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:09.764190 | orchestrator | 2025-05-28 19:32:09 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:32:09.764231 | orchestrator | 2025-05-28 19:32:09 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:12.802719 | orchestrator | 2025-05-28 19:32:12 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:12.802827 | orchestrator | 2025-05-28 19:32:12 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:12.803882 | orchestrator | 2025-05-28 19:32:12 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:12.804407 | orchestrator | 2025-05-28 19:32:12 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:12.805672 | orchestrator | 2025-05-28 19:32:12 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:32:12.805694 | orchestrator | 2025-05-28 19:32:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:15.840492 | orchestrator | 2025-05-28 19:32:15 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:15.840888 | orchestrator | 2025-05-28 19:32:15 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:15.841176 | orchestrator | 2025-05-28 19:32:15 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:15.841794 | orchestrator | 2025-05-28 19:32:15 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:15.842511 | orchestrator | 2025-05-28 19:32:15 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state STARTED 2025-05-28 19:32:15.842564 | orchestrator | 2025-05-28 19:32:15 | INFO  | Wait 1 
second(s) until the next check 2025-05-28 19:32:18.890131 | orchestrator | 2025-05-28 19:32:18 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:18.891334 | orchestrator | 2025-05-28 19:32:18 | INFO  | Task 8776cb18-31b6-4bd9-9507-24dfee0c1a34 is in state STARTED 2025-05-28 19:32:18.892868 | orchestrator | 2025-05-28 19:32:18 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:18.894287 | orchestrator | 2025-05-28 19:32:18 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:18.895155 | orchestrator | 2025-05-28 19:32:18 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:18.898405 | orchestrator | 2025-05-28 19:32:18.898457 | orchestrator | None 2025-05-28 19:32:18.898476 | orchestrator | 2025-05-28 19:32:18.898493 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:32:18.898510 | orchestrator | 2025-05-28 19:32:18.898526 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:32:18.898543 | orchestrator | Wednesday 28 May 2025 19:31:00 +0000 (0:00:00.221) 0:00:00.221 ********* 2025-05-28 19:32:18.898561 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:32:18.898579 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:32:18.898595 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:32:18.898651 | orchestrator | 2025-05-28 19:32:18.898668 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:32:18.898688 | orchestrator | Wednesday 28 May 2025 19:31:01 +0000 (0:00:00.777) 0:00:00.998 ********* 2025-05-28 19:32:18.898707 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-28 19:32:18.898725 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-28 19:32:18.898742 | orchestrator | ok: [testbed-node-2] 
=> (item=enable_placement_True) 2025-05-28 19:32:18.898759 | orchestrator | 2025-05-28 19:32:18.898778 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-28 19:32:18.898797 | orchestrator | 2025-05-28 19:32:18.898814 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-28 19:32:18.898834 | orchestrator | Wednesday 28 May 2025 19:31:01 +0000 (0:00:00.815) 0:00:01.814 ********* 2025-05-28 19:32:18.898852 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:32:18.898871 | orchestrator | 2025-05-28 19:32:18.898889 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-28 19:32:18.898907 | orchestrator | Wednesday 28 May 2025 19:31:02 +0000 (0:00:01.000) 0:00:02.814 ********* 2025-05-28 19:32:18.898925 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-28 19:32:18.898942 | orchestrator | 2025-05-28 19:32:18.898961 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-28 19:32:18.898980 | orchestrator | Wednesday 28 May 2025 19:31:06 +0000 (0:00:03.627) 0:00:06.441 ********* 2025-05-28 19:32:18.898998 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-28 19:32:18.899017 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-28 19:32:18.899037 | orchestrator | 2025-05-28 19:32:18.899053 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-28 19:32:18.899077 | orchestrator | Wednesday 28 May 2025 19:31:13 +0000 (0:00:06.579) 0:00:13.020 ********* 2025-05-28 19:32:18.899103 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-28 19:32:18.899120 | 
orchestrator | 2025-05-28 19:32:18.899138 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-28 19:32:18.899154 | orchestrator | Wednesday 28 May 2025 19:31:16 +0000 (0:00:03.242) 0:00:16.263 ********* 2025-05-28 19:32:18.899172 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 19:32:18.899190 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-28 19:32:18.899209 | orchestrator | 2025-05-28 19:32:18.899226 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-28 19:32:18.899256 | orchestrator | Wednesday 28 May 2025 19:31:20 +0000 (0:00:03.980) 0:00:20.243 ********* 2025-05-28 19:32:18.899268 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 19:32:18.899279 | orchestrator | 2025-05-28 19:32:18.899290 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-28 19:32:18.899301 | orchestrator | Wednesday 28 May 2025 19:31:23 +0000 (0:00:03.248) 0:00:23.492 ********* 2025-05-28 19:32:18.899312 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-28 19:32:18.899323 | orchestrator | 2025-05-28 19:32:18.899334 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-28 19:32:18.899344 | orchestrator | Wednesday 28 May 2025 19:31:27 +0000 (0:00:04.009) 0:00:27.502 ********* 2025-05-28 19:32:18.899355 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:32:18.899366 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:32:18.899377 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:32:18.899389 | orchestrator | 2025-05-28 19:32:18.899399 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-28 19:32:18.899410 | orchestrator | Wednesday 28 May 2025 19:31:27 +0000 (0:00:00.322) 0:00:27.824 
********* 2025-05-28 19:32:18.899441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.899478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.899491 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.899510 | orchestrator | 2025-05-28 19:32:18.899521 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-28 19:32:18.899532 | orchestrator | Wednesday 28 May 2025 19:31:29 +0000 (0:00:01.698) 0:00:29.523 ********* 2025-05-28 19:32:18.899543 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:32:18.899554 | orchestrator | 2025-05-28 19:32:18.899565 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-28 19:32:18.899577 | orchestrator | Wednesday 28 May 2025 19:31:29 +0000 (0:00:00.090) 0:00:29.613 ********* 2025-05-28 19:32:18.899587 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:32:18.899598 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:32:18.899686 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:32:18.899699 | orchestrator | 2025-05-28 19:32:18.899710 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-28 19:32:18.899721 | orchestrator | Wednesday 28 May 2025 19:31:29 +0000 (0:00:00.229) 0:00:29.843 
********* 2025-05-28 19:32:18.899732 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:32:18.899744 | orchestrator | 2025-05-28 19:32:18.899755 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-28 19:32:18.899766 | orchestrator | Wednesday 28 May 2025 19:31:30 +0000 (0:00:00.707) 0:00:30.550 ********* 2025-05-28 19:32:18.899784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.899807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.899820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.899841 | orchestrator | 2025-05-28 19:32:18.899852 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-28 19:32:18.899864 | orchestrator | Wednesday 28 May 2025 19:31:33 +0000 (0:00:03.001) 0:00:33.552 ********* 2025-05-28 19:32:18.899875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:32:18.899887 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:32:18.899904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:32:18.899916 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:32:18.899934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:32:18.899946 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:32:18.899958 | orchestrator | 2025-05-28 19:32:18.899969 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-28 19:32:18.899980 | orchestrator | Wednesday 28 May 2025 19:31:34 +0000 (0:00:00.835) 0:00:34.388 ********* 2025-05-28 19:32:18.899992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:32:18.900011 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:32:18.900022 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:32:18.900031 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:32:18.900046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:32:18.900056 | 
orchestrator | skipping: [testbed-node-2] 2025-05-28 19:32:18.900066 | orchestrator | 2025-05-28 19:32:18.900076 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-28 19:32:18.900085 | orchestrator | Wednesday 28 May 2025 19:31:36 +0000 (0:00:01.904) 0:00:36.293 ********* 2025-05-28 19:32:18.900103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.900115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.900132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.900142 | orchestrator | 2025-05-28 19:32:18.900152 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-28 19:32:18.900162 | orchestrator | Wednesday 28 May 2025 19:31:38 +0000 (0:00:01.821) 0:00:38.114 ********* 2025-05-28 19:32:18.900177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.900188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.900208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.900226 | orchestrator | 2025-05-28 19:32:18.900236 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-28 19:32:18.900245 | orchestrator | Wednesday 28 May 2025 19:31:41 +0000 (0:00:03.481) 0:00:41.596 ********* 2025-05-28 19:32:18.900255 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-28 19:32:18.900265 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-28 19:32:18.900275 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-28 19:32:18.900285 | orchestrator | 2025-05-28 19:32:18.900295 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-28 19:32:18.900305 | orchestrator | Wednesday 28 May 2025 19:31:43 +0000 (0:00:01.623) 0:00:43.219 ********* 2025-05-28 19:32:18.900314 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:32:18.900324 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:32:18.900334 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:32:18.900344 | orchestrator | 2025-05-28 19:32:18.900354 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-28 19:32:18.900364 | orchestrator | Wednesday 28 May 2025 19:31:44 +0000 (0:00:01.591) 0:00:44.811 ********* 2025-05-28 19:32:18.900374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:32:18.900384 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:32:18.900399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:32:18.900410 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:32:18.900427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-28 19:32:18.900450 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:32:18.900460 | orchestrator | 2025-05-28 19:32:18.900469 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-28 19:32:18.900479 | orchestrator | Wednesday 28 May 2025 19:31:47 +0000 (0:00:02.237) 0:00:47.048 ********* 2025-05-28 19:32:18.900489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.900500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.900515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2025-05-28 19:32:18.900525 | orchestrator | 2025-05-28 19:32:18.900535 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-28 19:32:18.900554 | orchestrator | Wednesday 28 May 2025 19:31:49 +0000 (0:00:02.222) 0:00:49.270 ********* 2025-05-28 19:32:18.900564 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:32:18.900574 | orchestrator | 2025-05-28 19:32:18.900583 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-28 19:32:18.900593 | orchestrator | Wednesday 28 May 2025 19:31:52 +0000 (0:00:02.842) 0:00:52.113 ********* 2025-05-28 19:32:18.900603 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:32:18.900674 | orchestrator | 2025-05-28 19:32:18.900685 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-28 19:32:18.900694 | orchestrator | Wednesday 28 May 2025 19:31:54 +0000 (0:00:02.371) 0:00:54.484 ********* 2025-05-28 19:32:18.900711 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:32:18.900721 | orchestrator | 2025-05-28 19:32:18.900731 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-28 19:32:18.900741 | orchestrator | Wednesday 28 May 2025 19:32:07 +0000 (0:00:12.723) 0:01:07.208 ********* 2025-05-28 19:32:18.900751 | orchestrator | 2025-05-28 19:32:18.900761 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-28 19:32:18.900770 | orchestrator | Wednesday 28 May 2025 19:32:07 +0000 (0:00:00.069) 0:01:07.278 ********* 2025-05-28 19:32:18.900780 | orchestrator | 2025-05-28 19:32:18.900790 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-28 19:32:18.900800 | orchestrator | Wednesday 28 May 2025 19:32:07 +0000 (0:00:00.131) 0:01:07.410 ********* 2025-05-28 19:32:18.900810 | orchestrator | 
2025-05-28 19:32:18.900820 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-28 19:32:18.900829 | orchestrator | Wednesday 28 May 2025 19:32:07 +0000 (0:00:00.052) 0:01:07.463 ********* 2025-05-28 19:32:18.900839 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:32:18.900849 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:32:18.900859 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:32:18.900869 | orchestrator | 2025-05-28 19:32:18.900879 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:32:18.900889 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-28 19:32:18.900901 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 19:32:18.900911 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-28 19:32:18.900921 | orchestrator | 2025-05-28 19:32:18.900931 | orchestrator | 2025-05-28 19:32:18.900941 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:32:18.900951 | orchestrator | Wednesday 28 May 2025 19:32:17 +0000 (0:00:09.874) 0:01:17.337 ********* 2025-05-28 19:32:18.900960 | orchestrator | =============================================================================== 2025-05-28 19:32:18.900970 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.72s 2025-05-28 19:32:18.900980 | orchestrator | placement : Restart placement-api container ----------------------------- 9.87s 2025-05-28 19:32:18.900990 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.58s 2025-05-28 19:32:18.901000 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.01s 2025-05-28 19:32:18.901010 | 
orchestrator | service-ks-register : placement | Creating users ------------------------ 3.98s 2025-05-28 19:32:18.901019 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.63s 2025-05-28 19:32:18.901029 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.48s 2025-05-28 19:32:18.901039 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.25s 2025-05-28 19:32:18.901049 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.24s 2025-05-28 19:32:18.901066 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 3.00s 2025-05-28 19:32:18.901075 | orchestrator | placement : Creating placement databases -------------------------------- 2.84s 2025-05-28 19:32:18.901085 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.37s 2025-05-28 19:32:18.901095 | orchestrator | placement : Copying over existing policy file --------------------------- 2.24s 2025-05-28 19:32:18.901104 | orchestrator | placement : Check placement containers ---------------------------------- 2.22s 2025-05-28 19:32:18.901114 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.90s 2025-05-28 19:32:18.901124 | orchestrator | placement : Copying over config.json files for services ----------------- 1.82s 2025-05-28 19:32:18.901134 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.70s 2025-05-28 19:32:18.901143 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.62s 2025-05-28 19:32:18.901158 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.59s 2025-05-28 19:32:18.901167 | orchestrator | placement : include_tasks ----------------------------------------------- 1.00s 2025-05-28 19:32:18.901177 | orchestrator | 
2025-05-28 19:32:18 | INFO  | Task 1cbd9fd6-9ecb-41d5-be22-eca1f566efbc is in state SUCCESS 2025-05-28 19:32:18.901187 | orchestrator | 2025-05-28 19:32:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:21.945698 | orchestrator | 2025-05-28 19:32:21 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:21.945786 | orchestrator | 2025-05-28 19:32:21 | INFO  | Task 8776cb18-31b6-4bd9-9507-24dfee0c1a34 is in state SUCCESS 2025-05-28 19:32:21.947279 | orchestrator | 2025-05-28 19:32:21 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:21.949196 | orchestrator | 2025-05-28 19:32:21 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:21.951093 | orchestrator | 2025-05-28 19:32:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:21.951237 | orchestrator | 2025-05-28 19:32:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:24.988945 | orchestrator | 2025-05-28 19:32:24 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:24.989040 | orchestrator | 2025-05-28 19:32:24 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:24.989064 | orchestrator | 2025-05-28 19:32:24 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:24.989876 | orchestrator | 2025-05-28 19:32:24 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:24.990350 | orchestrator | 2025-05-28 19:32:24 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED 2025-05-28 19:32:24.990373 | orchestrator | 2025-05-28 19:32:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:28.021036 | orchestrator | 2025-05-28 19:32:28 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:28.021138 | orchestrator | 2025-05-28 19:32:28 | INFO  | 
Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:28.021153 | orchestrator | 2025-05-28 19:32:28 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:28.021236 | orchestrator | 2025-05-28 19:32:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:28.024701 | orchestrator | 2025-05-28 19:32:28 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED 2025-05-28 19:32:28.024787 | orchestrator | 2025-05-28 19:32:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:31.061716 | orchestrator | 2025-05-28 19:32:31 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:31.062221 | orchestrator | 2025-05-28 19:32:31 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:31.062687 | orchestrator | 2025-05-28 19:32:31 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:31.063238 | orchestrator | 2025-05-28 19:32:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:31.063902 | orchestrator | 2025-05-28 19:32:31 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED 2025-05-28 19:32:31.063926 | orchestrator | 2025-05-28 19:32:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:34.121688 | orchestrator | 2025-05-28 19:32:34 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:34.121776 | orchestrator | 2025-05-28 19:32:34 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:34.121829 | orchestrator | 2025-05-28 19:32:34 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:34.122269 | orchestrator | 2025-05-28 19:32:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:34.122628 | orchestrator | 2025-05-28 19:32:34 | INFO  | Task 
0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED 2025-05-28 19:32:34.122751 | orchestrator | 2025-05-28 19:32:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:37.161299 | orchestrator | 2025-05-28 19:32:37 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:37.162826 | orchestrator | 2025-05-28 19:32:37 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:37.163731 | orchestrator | 2025-05-28 19:32:37 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:37.164573 | orchestrator | 2025-05-28 19:32:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:37.165896 | orchestrator | 2025-05-28 19:32:37 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED 2025-05-28 19:32:37.165918 | orchestrator | 2025-05-28 19:32:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:40.216319 | orchestrator | 2025-05-28 19:32:40 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:40.219424 | orchestrator | 2025-05-28 19:32:40 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:40.219460 | orchestrator | 2025-05-28 19:32:40 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:40.219473 | orchestrator | 2025-05-28 19:32:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:40.220448 | orchestrator | 2025-05-28 19:32:40 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED 2025-05-28 19:32:40.220512 | orchestrator | 2025-05-28 19:32:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:43.270318 | orchestrator | 2025-05-28 19:32:43 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state STARTED 2025-05-28 19:32:43.270433 | orchestrator | 2025-05-28 19:32:43 | INFO  | Task 
80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED 2025-05-28 19:32:43.270862 | orchestrator | 2025-05-28 19:32:43 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:32:43.271593 | orchestrator | 2025-05-28 19:32:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:32:43.272073 | orchestrator | 2025-05-28 19:32:43 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED 2025-05-28 19:32:43.272807 | orchestrator | 2025-05-28 19:32:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:32:46.328362 | orchestrator | 2025-05-28 19:32:46 | INFO  | Task dd8b1781-37a1-481c-be05-429e8d212eca is in state SUCCESS 2025-05-28 19:32:46.329491 | orchestrator | 2025-05-28 19:32:46.329532 | orchestrator | 2025-05-28 19:32:46.329545 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:32:46.329557 | orchestrator | 2025-05-28 19:32:46.329570 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:32:46.329582 | orchestrator | Wednesday 28 May 2025 19:32:20 +0000 (0:00:00.239) 0:00:00.239 ********* 2025-05-28 19:32:46.329593 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:32:46.329717 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:32:46.329730 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:32:46.329741 | orchestrator | 2025-05-28 19:32:46.329752 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:32:46.329858 | orchestrator | Wednesday 28 May 2025 19:32:20 +0000 (0:00:00.332) 0:00:00.572 ********* 2025-05-28 19:32:46.329870 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-28 19:32:46.329883 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-28 19:32:46.329894 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-28 
19:32:46.329905 | orchestrator | 2025-05-28 19:32:46.329916 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-28 19:32:46.329927 | orchestrator | 2025-05-28 19:32:46.329938 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-28 19:32:46.329949 | orchestrator | Wednesday 28 May 2025 19:32:20 +0000 (0:00:00.421) 0:00:00.993 ********* 2025-05-28 19:32:46.329960 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:32:46.329971 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:32:46.329982 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:32:46.329993 | orchestrator | 2025-05-28 19:32:46.330004 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:32:46.330135 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:32:46.330153 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:32:46.330166 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-28 19:32:46.330178 | orchestrator | 2025-05-28 19:32:46.330190 | orchestrator | 2025-05-28 19:32:46.330202 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:32:46.330214 | orchestrator | Wednesday 28 May 2025 19:32:21 +0000 (0:00:00.702) 0:00:01.696 ********* 2025-05-28 19:32:46.330227 | orchestrator | =============================================================================== 2025-05-28 19:32:46.330239 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.70s 2025-05-28 19:32:46.330251 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-05-28 19:32:46.330263 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.33s 2025-05-28 19:32:46.330297 | orchestrator | 2025-05-28 19:32:46.330310 | orchestrator | 2025-05-28 19:32:46.330323 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:32:46.330357 | orchestrator | 2025-05-28 19:32:46.330370 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:32:46.330444 | orchestrator | Wednesday 28 May 2025 19:29:30 +0000 (0:00:00.301) 0:00:00.301 ********* 2025-05-28 19:32:46.330459 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:32:46.330472 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:32:46.330493 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:32:46.330524 | orchestrator | 2025-05-28 19:32:46.330686 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:32:46.330699 | orchestrator | Wednesday 28 May 2025 19:29:30 +0000 (0:00:00.638) 0:00:00.940 ********* 2025-05-28 19:32:46.330711 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-05-28 19:32:46.330722 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-05-28 19:32:46.330733 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-05-28 19:32:46.330744 | orchestrator | 2025-05-28 19:32:46.330755 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-05-28 19:32:46.330766 | orchestrator | 2025-05-28 19:32:46.330776 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-28 19:32:46.330787 | orchestrator | Wednesday 28 May 2025 19:29:31 +0000 (0:00:00.430) 0:00:01.370 ********* 2025-05-28 19:32:46.330798 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:32:46.330809 | orchestrator | 2025-05-28 19:32:46.330820 | 
orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-28 19:32:46.330831 | orchestrator | Wednesday 28 May 2025 19:29:32 +0000 (0:00:00.986) 0:00:02.357 ********* 2025-05-28 19:32:46.330841 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-05-28 19:32:46.330852 | orchestrator | 2025-05-28 19:32:46.330939 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-28 19:32:46.330952 | orchestrator | Wednesday 28 May 2025 19:29:35 +0000 (0:00:03.589) 0:00:05.947 ********* 2025-05-28 19:32:46.330964 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-28 19:32:46.330976 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-28 19:32:46.330987 | orchestrator | 2025-05-28 19:32:46.330998 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-28 19:32:46.331009 | orchestrator | Wednesday 28 May 2025 19:29:42 +0000 (0:00:06.376) 0:00:12.323 ********* 2025-05-28 19:32:46.331020 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-28 19:32:46.331031 | orchestrator | 2025-05-28 19:32:46.331042 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-28 19:32:46.331053 | orchestrator | Wednesday 28 May 2025 19:29:45 +0000 (0:00:03.409) 0:00:15.733 ********* 2025-05-28 19:32:46.331101 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 19:32:46.331125 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-28 19:32:46.331137 | orchestrator | 2025-05-28 19:32:46.331148 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-05-28 19:32:46.331159 | orchestrator | Wednesday 28 May 2025 19:29:49 +0000 (0:00:03.695) 
0:00:19.429 ********* 2025-05-28 19:32:46.331170 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 19:32:46.331181 | orchestrator | 2025-05-28 19:32:46.331191 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-05-28 19:32:46.331202 | orchestrator | Wednesday 28 May 2025 19:29:52 +0000 (0:00:03.034) 0:00:22.464 ********* 2025-05-28 19:32:46.331213 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-28 19:32:46.331224 | orchestrator | 2025-05-28 19:32:46.331235 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-28 19:32:46.331246 | orchestrator | Wednesday 28 May 2025 19:29:56 +0000 (0:00:04.057) 0:00:26.522 ********* 2025-05-28 19:32:46.331261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.331291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.331304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.331316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.331530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.331561 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.331572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.331584 | orchestrator |
2025-05-28 19:32:46.331595 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-05-28 19:32:46.331649 | orchestrator | Wednesday 28 May 2025 19:29:59 +0000 (0:00:02.939) 0:00:29.462 *********
2025-05-28 19:32:46.331661 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:32:46.331672 | orchestrator |
2025-05-28 19:32:46.331683 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-05-28 19:32:46.331694 | orchestrator | Wednesday 28 May 2025 19:29:59 +0000 (0:00:00.133) 0:00:29.595 *********
2025-05-28 19:32:46.331705 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:32:46.331716 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:32:46.331727 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:32:46.331738 | orchestrator |
2025-05-28 19:32:46.331749 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-28 19:32:46.331759 | orchestrator | Wednesday 28 May 2025 19:29:59 +0000 (0:00:00.403) 0:00:29.999 *********
2025-05-28 19:32:46.331770 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:32:46.331781 | orchestrator |
2025-05-28 19:32:46.331792 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-05-28 19:32:46.331803 | orchestrator | Wednesday 28 May 2025 19:30:00 +0000 (0:00:00.608) 0:00:30.607 *********
2025-05-28 19:32:46.331815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 19:32:46.331834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes':
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.331854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.331866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.331907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.332012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.332033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.332045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.332061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.332073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.332084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.332103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.332123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.332134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.332162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.332178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.332190 | orchestrator |
2025-05-28 19:32:46.332201 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-05-28 19:32:46.332212 | orchestrator | Wednesday 28 May 2025 19:30:06 +0000 (0:00:06.193) 0:00:36.800 *********
2025-05-28 19:32:46.332224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:32:46.332248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:32:46.332261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332311 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:32:46.332323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:32:46.332346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:32:46.332358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332408 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:32:46.332420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:32:46.332442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:32:46.332454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.332505 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:32:46.332516 | orchestrator |
2025-05-28 19:32:46.332527 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-05-28 19:32:46.332538 | orchestrator | Wednesday 28 May 2025 19:30:07 +0000 (0:00:00.943) 0:00:37.743 *********
2025-05-28 19:32:46.332550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 19:32:46.332573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes':
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:32:46.332585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332714 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:32:46.332725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:32:46.332745 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:32:46.332765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:32:46.332803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:32:46.332843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332914 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:32:46.332929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.332946 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:32:46.332955 | orchestrator | 2025-05-28 19:32:46.332965 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-28 19:32:46.332975 | orchestrator | Wednesday 28 May 2025 19:30:09 +0000 (0:00:02.011) 0:00:39.755 ********* 2025-05-28 19:32:46.332985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.333002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.333013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.333023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 
53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2025-05-28 19:32:46.333090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333129 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.333196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.333226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.333236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.333246 | orchestrator | 2025-05-28 19:32:46.333255 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-28 19:32:46.333265 | orchestrator | Wednesday 28 May 2025 19:30:17 +0000 (0:00:08.039) 0:00:47.795 ********* 2025-05-28 19:32:46.333690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.333710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.333735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.333746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-28 19:32:46.333756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-28 19:32:46.333774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-28 19:32:46.333785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.333978 | orchestrator |
2025-05-28 19:32:46.333988 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-05-28 19:32:46.333998 | orchestrator | Wednesday 28 May 2025 19:30:43 +0000 (0:00:25.933) 0:01:13.728 *********
2025-05-28 19:32:46.334007 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-28 19:32:46.334059 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-28 19:32:46.334071 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-28 19:32:46.334081 | orchestrator |
2025-05-28 19:32:46.334091 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-05-28 19:32:46.334101 | orchestrator | Wednesday 28 May 2025 19:30:52 +0000 (0:00:08.531) 0:01:22.259 *********
2025-05-28 19:32:46.334110 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-28 19:32:46.334120 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-28 19:32:46.334134 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-28 19:32:46.334144 | orchestrator |
2025-05-28 19:32:46.334153 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-05-28 19:32:46.334163 | orchestrator | Wednesday 28 May 2025 19:30:56 +0000 (0:00:04.796) 0:01:27.056 *********
2025-05-28 19:32:46.334173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 19:32:46.334190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 19:32:46.334201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 19:32:46.334217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-28 19:32:46.334228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-28 19:32:46.334278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-28 19:32:46.334333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334452 | orchestrator |
2025-05-28 19:32:46.334462 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-05-28 19:32:46.334478 | orchestrator | Wednesday 28 May 2025 19:31:00 +0000 (0:00:03.581) 0:01:30.637 *********
2025-05-28 19:32:46.334496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 19:32:46.334508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 19:32:46.334523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-28 19:32:46.334534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-28 19:32:46.334592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-28 19:32:46.334625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-28 19:32:46.334707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-28 19:32:46.334794 | orchestrator |
2025-05-28 19:32:46.334804 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-28 19:32:46.334813 |
orchestrator | Wednesday 28 May 2025 19:31:04 +0000 (0:00:03.966) 0:01:34.603 ********* 2025-05-28 19:32:46.334823 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:32:46.334833 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:32:46.334843 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:32:46.334852 | orchestrator | 2025-05-28 19:32:46.334862 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-28 19:32:46.334872 | orchestrator | Wednesday 28 May 2025 19:31:05 +0000 (0:00:00.576) 0:01:35.180 ********* 2025-05-28 19:32:46.334888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:32:46.334899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:32:46.334915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.334932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.334942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.334952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.334962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.334972 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:32:46.334986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:32:46.335002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:32:46.335017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335079 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:32:46.335090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-28 19:32:46.335105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-28 19:32:46.335115 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335177 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:32:46.335186 | orchestrator | 2025-05-28 19:32:46.335196 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-28 19:32:46.335206 | orchestrator | Wednesday 28 May 2025 19:31:07 +0000 (0:00:02.214) 0:01:37.394 ********* 2025-05-28 19:32:46.335220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.335232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.335242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-28 19:32:46.335256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-28 19:32:46.335499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-28 19:32:46.335515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  
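The "Check designate containers" loop above reports `changed` for every service whose definition has `enabled: True` and `skipping` for `designate-sink`, whose definition carries `enabled: False`. A minimal sketch of that selection logic, assuming a service dict shaped like the loop items in the log (names are illustrative, not kolla-ansible's actual code):

```python
# Hypothetical sketch: how an enabled flag in a kolla-style service dict
# drives the changed/skipping split seen in the loop output above.
designate_services = {
    "designate-worker": {"container_name": "designate_worker", "enabled": True},
    "designate-sink": {"container_name": "designate_sink", "enabled": False},
}

def plan_containers(services):
    """Return (deploy, skip) lists mirroring the changed/skipping items."""
    deploy = [name for name, svc in services.items() if svc["enabled"]]
    skip = [name for name, svc in services.items() if not svc["enabled"]]
    return deploy, skip

deploy, skip = plan_containers(designate_services)
# designate-sink lands in skip because enabled is False
```

This mirrors why every node skips `designate-sink` while deploying the other designate containers.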
2025-05-28 19:32:46.335525 | orchestrator |
2025-05-28 19:32:46.335535 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-28 19:32:46.335545 | orchestrator | Wednesday 28 May 2025 19:31:12 +0000 (0:00:05.030) 0:01:42.425 *********
2025-05-28 19:32:46.335555 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:32:46.335565 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:32:46.335574 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:32:46.335584 | orchestrator |
2025-05-28 19:32:46.335594 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-05-28 19:32:46.335624 | orchestrator | Wednesday 28 May 2025 19:31:12 +0000 (0:00:00.308) 0:01:42.733 *********
2025-05-28 19:32:46.335634 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-05-28 19:32:46.335644 | orchestrator |
2025-05-28 19:32:46.335654 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-05-28 19:32:46.335664 | orchestrator | Wednesday 28 May 2025 19:31:14 +0000 (0:00:02.031) 0:01:44.764 *********
2025-05-28 19:32:46.335679 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-28 19:32:46.335689 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-05-28 19:32:46.335699 | orchestrator |
2025-05-28 19:32:46.335709 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-05-28 19:32:46.335718 | orchestrator | Wednesday 28 May 2025 19:31:17 +0000 (0:00:02.349) 0:01:47.114 *********
2025-05-28 19:32:46.335728 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:32:46.335737 | orchestrator |
2025-05-28 19:32:46.335747 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-28 19:32:46.335757 | orchestrator | Wednesday 28 May 2025 19:31:32 +0000 (0:00:14.983) 0:02:02.097 *********
2025-05-28 19:32:46.335766 | orchestrator |
2025-05-28 19:32:46.335776 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-28 19:32:46.335786 | orchestrator | Wednesday 28 May 2025 19:31:32 +0000 (0:00:00.267) 0:02:02.365 *********
2025-05-28 19:32:46.335795 | orchestrator |
2025-05-28 19:32:46.335805 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-28 19:32:46.335814 | orchestrator | Wednesday 28 May 2025 19:31:32 +0000 (0:00:00.079) 0:02:02.445 *********
2025-05-28 19:32:46.335824 | orchestrator |
2025-05-28 19:32:46.335833 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-05-28 19:32:46.335904 | orchestrator | Wednesday 28 May 2025 19:31:32 +0000 (0:00:00.100) 0:02:02.545 *********
2025-05-28 19:32:46.335917 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:32:46.335927 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:32:46.335937 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:32:46.335947 | orchestrator |
2025-05-28 19:32:46.335956 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-05-28 19:32:46.335966 | orchestrator | Wednesday 28 May 2025 19:31:46 +0000 (0:00:13.678) 0:02:16.224 *********
2025-05-28 19:32:46.335975 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:32:46.335985 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:32:46.335995 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:32:46.336004 | orchestrator |
2025-05-28 19:32:46.336014 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-05-28 19:32:46.336023 | orchestrator | Wednesday 28 May 2025 19:31:59 +0000 (0:00:13.793) 0:02:30.018 *********
2025-05-28 19:32:46.336033 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:32:46.336043 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:32:46.336052 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:32:46.336062 | orchestrator |
2025-05-28 19:32:46.336072 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-05-28 19:32:46.336081 | orchestrator | Wednesday 28 May 2025 19:32:12 +0000 (0:00:12.453) 0:02:42.472 *********
2025-05-28 19:32:46.336091 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:32:46.336101 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:32:46.336110 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:32:46.336120 | orchestrator |
2025-05-28 19:32:46.336129 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-05-28 19:32:46.336139 | orchestrator | Wednesday 28 May 2025 19:32:24 +0000 (0:00:12.241) 0:02:54.713 *********
2025-05-28 19:32:46.336148 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:32:46.336158 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:32:46.336168 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:32:46.336177 | orchestrator |
2025-05-28 19:32:46.336187 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-05-28 19:32:46.336196 | orchestrator | Wednesday 28 May 2025 19:32:31 +0000 (0:00:06.570) 0:03:01.284 *********
2025-05-28 19:32:46.336206 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:32:46.336216 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:32:46.336225 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:32:46.336235 | orchestrator |
2025-05-28 19:32:46.336244 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-05-28 19:32:46.336260 | orchestrator | Wednesday 28 May 2025 19:32:38 +0000 (0:00:07.384) 0:03:08.669 *********
2025-05-28 19:32:46.336270 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:32:46.336279 | orchestrator |
2025-05-28 19:32:46.336289 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:32:46.336305 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-28 19:32:46.336316 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 19:32:46.336326 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-28 19:32:46.336336 | orchestrator |
2025-05-28 19:32:46.336346 | orchestrator |
2025-05-28 19:32:46.336355 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:32:46.336365 | orchestrator | Wednesday 28 May 2025 19:32:44 +0000 (0:00:05.565) 0:03:14.234 *********
2025-05-28 19:32:46.336375 | orchestrator | ===============================================================================
2025-05-28 19:32:46.336384 | orchestrator | designate : Copying over designate.conf -------------------------------- 25.93s
2025-05-28 19:32:46.336394 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.98s
2025-05-28 19:32:46.336404 | orchestrator | designate : Restart designate-api container ---------------------------- 13.79s
2025-05-28 19:32:46.336414 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.68s
2025-05-28 19:32:46.336423 | orchestrator | designate : Restart designate-central container ------------------------ 12.45s
2025-05-28 19:32:46.336433 | orchestrator | designate : Restart designate-producer container ----------------------- 12.24s
2025-05-28 19:32:46.336443 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.53s
2025-05-28 19:32:46.336452 | orchestrator | designate : Copying over config.json files for services ----------------- 8.04s
2025-05-28 19:32:46.336462 | orchestrator | designate : Restart designate-worker container -------------------------- 7.38s
2025-05-28 19:32:46.336471 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.57s
2025-05-28 19:32:46.336481 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.38s
2025-05-28 19:32:46.336491 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.19s
2025-05-28 19:32:46.336500 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.57s
2025-05-28 19:32:46.336510 | orchestrator | designate : Check designate containers ---------------------------------- 5.03s
2025-05-28 19:32:46.336520 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.80s
2025-05-28 19:32:46.336529 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.06s
2025-05-28 19:32:46.336539 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.97s
2025-05-28 19:32:46.336549 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.70s
2025-05-28 19:32:46.336558 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.59s
2025-05-28 19:32:46.336572 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.58s
2025-05-28 19:32:46.336582 | orchestrator | 2025-05-28 19:32:46 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:32:46.336592 | orchestrator | 2025-05-28 19:32:46 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:32:46.336727 | orchestrator | 2025-05-28 19:32:46 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:32:46.336754 | orchestrator | 2025-05-28 19:32:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:32:46.336774 |
orchestrator | 2025-05-28 19:32:46 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:32:46.336784 | orchestrator | 2025-05-28 19:32:46 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:32:49.406923 | orchestrator | 2025-05-28 19:32:49 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:32:49.407026 | orchestrator | 2025-05-28 19:32:49 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:32:49.407549 | orchestrator | 2025-05-28 19:32:49 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:32:49.411653 | orchestrator | 2025-05-28 19:32:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:32:49.415407 | orchestrator | 2025-05-28 19:32:49 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:32:49.415432 | orchestrator | 2025-05-28 19:32:49 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:32:52.472816 | orchestrator | 2025-05-28 19:32:52 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:32:52.473359 | orchestrator | 2025-05-28 19:32:52 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:32:52.474709 | orchestrator | 2025-05-28 19:32:52 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:32:52.475559 | orchestrator | 2025-05-28 19:32:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:32:52.477022 | orchestrator | 2025-05-28 19:32:52 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:32:52.477066 | orchestrator | 2025-05-28 19:32:52 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:32:55.523302 | orchestrator | 2025-05-28 19:32:55 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:32:55.523816 | orchestrator | 2025-05-28 19:32:55 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:32:55.524391 | orchestrator | 2025-05-28 19:32:55 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:32:55.525924 | orchestrator | 2025-05-28 19:32:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:32:55.526691 | orchestrator | 2025-05-28 19:32:55 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:32:55.526819 | orchestrator | 2025-05-28 19:32:55 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:32:58.571198 | orchestrator | 2025-05-28 19:32:58 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:32:58.571302 | orchestrator | 2025-05-28 19:32:58 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:32:58.571537 | orchestrator | 2025-05-28 19:32:58 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:32:58.572432 | orchestrator | 2025-05-28 19:32:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:32:58.573113 | orchestrator | 2025-05-28 19:32:58 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:32:58.573146 | orchestrator | 2025-05-28 19:32:58 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:01.597659 | orchestrator | 2025-05-28 19:33:01 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:33:01.597864 | orchestrator | 2025-05-28 19:33:01 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:01.598222 | orchestrator | 2025-05-28 19:33:01 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:01.598755 | orchestrator | 2025-05-28 19:33:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:01.599227 | orchestrator | 2025-05-28 19:33:01 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:01.599266 | orchestrator | 2025-05-28 19:33:01 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:04.622876 | orchestrator | 2025-05-28 19:33:04 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:33:04.624302 | orchestrator | 2025-05-28 19:33:04 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:04.624659 | orchestrator | 2025-05-28 19:33:04 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:04.625241 | orchestrator | 2025-05-28 19:33:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:04.625721 | orchestrator | 2025-05-28 19:33:04 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:04.625744 | orchestrator | 2025-05-28 19:33:04 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:07.661881 | orchestrator | 2025-05-28 19:33:07 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:33:07.662935 | orchestrator | 2025-05-28 19:33:07 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:07.662986 | orchestrator | 2025-05-28 19:33:07 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:07.663000 | orchestrator | 2025-05-28 19:33:07 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:07.663207 | orchestrator | 2025-05-28 19:33:07 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:07.663226 | orchestrator | 2025-05-28 19:33:07 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:10.684139 | orchestrator | 2025-05-28 19:33:10 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:33:10.684248 | orchestrator | 2025-05-28 19:33:10 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:10.684432 | orchestrator | 2025-05-28 19:33:10 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:10.684974 | orchestrator | 2025-05-28 19:33:10 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:10.685399 | orchestrator | 2025-05-28 19:33:10 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:10.685431 | orchestrator | 2025-05-28 19:33:10 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:13.708145 | orchestrator | 2025-05-28 19:33:13 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:33:13.708251 | orchestrator | 2025-05-28 19:33:13 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:13.708437 | orchestrator | 2025-05-28 19:33:13 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:13.708912 | orchestrator | 2025-05-28 19:33:13 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:13.709421 | orchestrator | 2025-05-28 19:33:13 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:13.709445 | orchestrator | 2025-05-28 19:33:13 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:16.750382 | orchestrator | 2025-05-28 19:33:16 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:33:16.752249 | orchestrator | 2025-05-28 19:33:16 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:16.752924 | orchestrator | 2025-05-28 19:33:16 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:16.753689 | orchestrator | 2025-05-28 19:33:16 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:16.754655 | orchestrator | 2025-05-28 19:33:16 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:16.754685 | orchestrator | 2025-05-28 19:33:16 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:19.801411 | orchestrator | 2025-05-28 19:33:19 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state STARTED
2025-05-28 19:33:19.802909 | orchestrator | 2025-05-28 19:33:19 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:19.804434 | orchestrator | 2025-05-28 19:33:19 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:19.805793 | orchestrator | 2025-05-28 19:33:19 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:19.807223 | orchestrator | 2025-05-28 19:33:19 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:19.807449 | orchestrator | 2025-05-28 19:33:19 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:22.862319 | orchestrator | 2025-05-28 19:33:22 | INFO  | Task d7edd719-c26b-456f-b4c0-19185d16a4dc is in state SUCCESS
2025-05-28 19:33:22.862665 | orchestrator | 2025-05-28 19:33:22 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:22.864343 | orchestrator | 2025-05-28 19:33:22 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:22.865494 | orchestrator | 2025-05-28 19:33:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:22.866965 | orchestrator | 2025-05-28 19:33:22 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:22.866990 | orchestrator | 2025-05-28 19:33:22 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:25.929548 | orchestrator | 2025-05-28 19:33:25 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:25.930238 | orchestrator | 2025-05-28 19:33:25 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:25.932275 | orchestrator | 2025-05-28 19:33:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:25.933265 | orchestrator | 2025-05-28 19:33:25 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:25.933888 | orchestrator | 2025-05-28 19:33:25 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:25.933916 | orchestrator | 2025-05-28 19:33:25 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:28.980043 | orchestrator | 2025-05-28 19:33:28 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:28.981422 | orchestrator | 2025-05-28 19:33:28 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:28.983216 | orchestrator | 2025-05-28 19:33:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:28.984794 | orchestrator | 2025-05-28 19:33:28 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:28.992674 | orchestrator | 2025-05-28 19:33:28 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:28.992757 | orchestrator | 2025-05-28 19:33:28 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:32.046230 | orchestrator | 2025-05-28 19:33:32 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:32.049096 | orchestrator | 2025-05-28 19:33:32 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:32.050749 | orchestrator | 2025-05-28 19:33:32 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:32.051261 | orchestrator | 2025-05-28 19:33:32 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:32.051871 | orchestrator | 2025-05-28 19:33:32 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:32.053265 | orchestrator | 2025-05-28 19:33:32 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:35.092873 | orchestrator | 2025-05-28 19:33:35 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:35.094203 | orchestrator | 2025-05-28 19:33:35 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:35.095185 | orchestrator | 2025-05-28 19:33:35 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:35.095924 | orchestrator | 2025-05-28 19:33:35 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:35.096650 | orchestrator | 2025-05-28 19:33:35 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:35.096675 | orchestrator | 2025-05-28 19:33:35 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:38.130541 | orchestrator | 2025-05-28 19:33:38 | INFO  | Task 80fc62dc-9338-44db-99e4-8c159650c143 is in state STARTED
2025-05-28 19:33:38.130640 | orchestrator | 2025-05-28 19:33:38 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:38.134983 | orchestrator | 2025-05-28 19:33:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:38.135024 | orchestrator | 2025-05-28 19:33:38 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:38.135030 | orchestrator | 2025-05-28 19:33:38 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:38.135035 | orchestrator | 2025-05-28 19:33:38 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:41.159509 | orchestrator | 2025-05-28 19:33:41 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:33:41.163613 | orchestrator | 2025-05-28 19:33:41 | INFO  | Task
80fc62dc-9338-44db-99e4-8c159650c143 is in state SUCCESS
2025-05-28 19:33:41.163664 | orchestrator | 2025-05-28 19:33:41 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:41.163677 | orchestrator | 2025-05-28 19:33:41 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:41.163688 | orchestrator | 2025-05-28 19:33:41 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:41.163700 | orchestrator | 2025-05-28 19:33:41 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:41.163711 | orchestrator | 2025-05-28 19:33:41 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:41.164882 | orchestrator |
2025-05-28 19:33:41.165058 | orchestrator |
2025-05-28 19:33:41.165076 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 19:33:41.165088 | orchestrator |
2025-05-28 19:33:41.165100 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 19:33:41.165132 | orchestrator | Wednesday 28 May 2025 19:32:48 +0000 (0:00:00.512) 0:00:00.512 *********
2025-05-28 19:33:41.165144 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:33:41.165156 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:33:41.165167 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:33:41.165178 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:33:41.165189 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:33:41.165200 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:33:41.165211 | orchestrator | ok: [testbed-manager]
2025-05-28 19:33:41.165222 | orchestrator |
2025-05-28 19:33:41.165234 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 19:33:41.165245 | orchestrator | Wednesday 28 May 2025 19:32:49 +0000 (0:00:01.052) 0:00:01.564 *********
2025-05-28 19:33:41.165256 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-05-28 19:33:41.165268 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-05-28 19:33:41.165279 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-05-28 19:33:41.165290 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-05-28 19:33:41.165301 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-05-28 19:33:41.165312 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-05-28 19:33:41.165323 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-05-28 19:33:41.165334 | orchestrator |
2025-05-28 19:33:41.165345 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-28 19:33:41.165356 | orchestrator |
2025-05-28 19:33:41.165367 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-05-28 19:33:41.165379 | orchestrator | Wednesday 28 May 2025 19:32:50 +0000 (0:00:01.594) 0:00:02.606 *********
2025-05-28 19:33:41.165410 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-05-28 19:33:41.165423 | orchestrator |
2025-05-28 19:33:41.165435 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-05-28 19:33:41.165446 | orchestrator | Wednesday 28 May 2025 19:32:52 +0000 (0:00:01.594) 0:00:04.200 *********
2025-05-28 19:33:41.165459 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-05-28 19:33:41.165472 | orchestrator |
2025-05-28 19:33:41.165485 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-05-28 19:33:41.165497 | orchestrator | Wednesday 28 May 2025 19:32:55 +0000 (0:00:03.337) 0:00:07.537 *********
2025-05-28 19:33:41.165511 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-05-28 19:33:41.165524 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-05-28 19:33:41.165536 | orchestrator |
2025-05-28 19:33:41.165548 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-05-28 19:33:41.165561 | orchestrator | Wednesday 28 May 2025 19:33:02 +0000 (0:00:06.904) 0:00:14.442 *********
2025-05-28 19:33:41.165592 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-28 19:33:41.165606 | orchestrator |
2025-05-28 19:33:41.165618 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-05-28 19:33:41.165630 | orchestrator | Wednesday 28 May 2025 19:33:06 +0000 (0:00:04.199) 0:00:18.642 *********
2025-05-28 19:33:41.165643 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-28 19:33:41.165655 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-05-28 19:33:41.165667 | orchestrator |
2025-05-28 19:33:41.165679 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-05-28 19:33:41.165691 | orchestrator | Wednesday 28 May 2025 19:33:10 +0000 (0:00:03.910) 0:00:22.552 *********
2025-05-28 19:33:41.165703 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-28 19:33:41.165716 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-05-28 19:33:41.165736 | orchestrator |
2025-05-28 19:33:41.165749 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-05-28 19:33:41.165772 | orchestrator | Wednesday 28 May 2025 19:33:17 +0000 (0:00:06.475) 0:00:29.027 *********
2025-05-28 19:33:41.165785 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-05-28 19:33:41.165797 | orchestrator |
2025-05-28 19:33:41.165810 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:33:41.165823 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:33:41.165839 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:33:41.165859 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:33:41.165878 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:33:41.165897 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:33:41.165935 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:33:41.165955 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:33:41.165967 | orchestrator |
2025-05-28 19:33:41.165978 | orchestrator |
2025-05-28 19:33:41.165989 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:33:41.166000 | orchestrator | Wednesday 28 May 2025 19:33:22 +0000 (0:00:05.244) 0:00:34.272 *********
2025-05-28 19:33:41.166011 | orchestrator | ===============================================================================
2025-05-28 19:33:41.166071 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.90s
2025-05-28 19:33:41.166083 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.48s
2025-05-28 19:33:41.166094 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.24s
2025-05-28 19:33:41.166105 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.20s
2025-05-28 19:33:41.166116 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.91s
2025-05-28 19:33:41.166127 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.34s
2025-05-28 19:33:41.166138 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.59s
2025-05-28 19:33:41.166149 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.05s
2025-05-28 19:33:41.166160 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.04s
2025-05-28 19:33:41.166171 | orchestrator |
2025-05-28 19:33:41.166182 | orchestrator |
2025-05-28 19:33:41.166193 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 19:33:41.166204 | orchestrator |
2025-05-28 19:33:41.166215 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 19:33:41.166226 | orchestrator | Wednesday 28 May 2025 19:31:37 +0000 (0:00:00.280) 0:00:00.280 *********
2025-05-28 19:33:41.166238 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:33:41.166249 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:33:41.166260 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:33:41.166272 | orchestrator |
2025-05-28 19:33:41.166283 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 19:33:41.166294 | orchestrator | Wednesday 28 May 2025 19:31:37 +0000 (0:00:00.338) 0:00:00.618 *********
2025-05-28 19:33:41.166305 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-05-28 19:33:41.166316 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-05-28 19:33:41.166336 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-05-28 19:33:41.166347 | orchestrator |
2025-05-28 19:33:41.166359 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-28 19:33:41.166370 | orchestrator |
2025-05-28 19:33:41.166381 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-28 19:33:41.166391 | orchestrator | Wednesday 28 May 2025 19:31:38 +0000 (0:00:00.337) 0:00:00.956 *********
2025-05-28 19:33:41.166402 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:33:41.166413 | orchestrator |
2025-05-28 19:33:41.166424 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-28 19:33:41.166435 | orchestrator | Wednesday 28 May 2025 19:31:39 +0000 (0:00:01.041) 0:00:01.997 *********
2025-05-28 19:33:41.166446 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-05-28 19:33:41.166457 | orchestrator |
2025-05-28 19:33:41.166468 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-05-28 19:33:41.166479 | orchestrator | Wednesday 28 May 2025 19:31:42 +0000 (0:00:03.611) 0:00:05.609 *********
2025-05-28 19:33:41.166490 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-05-28 19:33:41.166501 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-05-28 19:33:41.166512 | orchestrator |
2025-05-28 19:33:41.166536 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-05-28 19:33:41.166548 | orchestrator | Wednesday 28 May 2025 19:31:49 +0000 (0:00:06.651) 0:00:12.261 *********
2025-05-28 19:33:41.166569 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-28 19:33:41.166608 | orchestrator |
2025-05-28 19:33:41.166636 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-05-28 19:33:41.166656 | orchestrator | Wednesday 28 May 2025 19:31:53 +0000 (0:00:03.840) 0:00:16.102 *********
2025-05-28 19:33:41.166674 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-28 19:33:41.166686 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-05-28 19:33:41.166696 | orchestrator |
2025-05-28 19:33:41.166707 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-05-28 19:33:41.166718 | orchestrator | Wednesday 28 May 2025 19:31:57 +0000 (0:00:04.049) 0:00:20.151 *********
2025-05-28 19:33:41.166729 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-28 19:33:41.166740 | orchestrator |
2025-05-28 19:33:41.166751 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-05-28 19:33:41.166763 | orchestrator | Wednesday 28 May 2025 19:32:00 +0000 (0:00:03.366) 0:00:23.518 *********
2025-05-28 19:33:41.166773 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-05-28 19:33:41.166784 | orchestrator |
2025-05-28 19:33:41.166796 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-05-28 19:33:41.166807 | orchestrator | Wednesday 28 May 2025 19:32:04 +0000 (0:00:04.205) 0:00:27.723 *********
2025-05-28 19:33:41.166817 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:33:41.166829 | orchestrator |
2025-05-28 19:33:41.166840 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-05-28 19:33:41.166859 | orchestrator | Wednesday 28 May 2025 19:32:08 +0000 (0:00:03.320) 0:00:31.044 *********
2025-05-28 19:33:41.166870 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:33:41.166882 | orchestrator |
2025-05-28 19:33:41.166893 | orchestrator | TASK [magnum : Creating Magnum trustee user role]
****************************** 2025-05-28 19:33:41.166904 | orchestrator | Wednesday 28 May 2025 19:32:12 +0000 (0:00:04.092) 0:00:35.136 ********* 2025-05-28 19:33:41.166915 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:33:41.166926 | orchestrator | 2025-05-28 19:33:41.166937 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-28 19:33:41.166955 | orchestrator | Wednesday 28 May 2025 19:32:15 +0000 (0:00:03.644) 0:00:38.781 ********* 2025-05-28 19:33:41.166969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.166984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.167000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.167012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.167032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.167050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.167062 | orchestrator | 2025-05-28 19:33:41.167073 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-28 19:33:41.167084 | orchestrator | Wednesday 28 May 2025 19:32:17 +0000 (0:00:01.464) 0:00:40.246 ********* 2025-05-28 
19:33:41.167095 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:33:41.167107 | orchestrator |
2025-05-28 19:33:41.167117 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-05-28 19:33:41.167129 | orchestrator | Wednesday 28 May 2025 19:32:17 +0000 (0:00:00.087) 0:00:40.333 *********
2025-05-28 19:33:41.167139 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:33:41.167151 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:33:41.167162 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:33:41.167173 | orchestrator |
2025-05-28 19:33:41.167183 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-05-28 19:33:41.167195 | orchestrator | Wednesday 28 May 2025 19:32:17 +0000 (0:00:00.281) 0:00:40.615 *********
2025-05-28 19:33:41.167206 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-28 19:33:41.167217 | orchestrator |
2025-05-28 19:33:41.167228 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-05-28 19:33:41.167239 | orchestrator | Wednesday 28 May 2025 19:32:18 +0000 (0:00:00.441) 0:00:41.056 *********
2025-05-28 19:33:41.167255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'},
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.167267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.167285 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:33:41.167305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.167317 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.167329 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:33:41.167341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.167357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:33:41.167369 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:33:41.167381 | orchestrator |
2025-05-28 19:33:41.167392 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-05-28 19:33:41.167403 | orchestrator | Wednesday 28 May 2025 19:32:18 +0000 (0:00:00.630) 0:00:41.687 *********
2025-05-28 19:33:41.167414 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:33:41.167425 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:33:41.167441 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:33:41.167452 | orchestrator |
2025-05-28 19:33:41.167463 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-28 19:33:41.167474 | orchestrator | Wednesday 28 May 2025 19:32:19 +0000 (0:00:00.246) 0:00:41.934 *********
2025-05-28 19:33:41.167485 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:33:41.167496 | orchestrator |
2025-05-28 19:33:41.167507 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-05-28 19:33:41.167519 | orchestrator | Wednesday 28 May 2025 19:32:19 +0000 (0:00:00.508) 0:00:42.443 *********
2025-05-28 19:33:41.167537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name':
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.167549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.167561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.167604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.167624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.167643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.167655 | orchestrator | 2025-05-28 19:33:41.167667 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-28 19:33:41.167678 | orchestrator | Wednesday 28 May 2025 19:32:21 +0000 (0:00:02.279) 0:00:44.722 ********* 2025-05-28 19:33:41.167690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.167701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.167712 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:33:41.167728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.167751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.167763 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:33:41.167775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2025-05-28 19:33:41.167787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.167798 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:33:41.167809 | orchestrator | 2025-05-28 19:33:41.167820 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-28 19:33:41.167832 | orchestrator | Wednesday 28 May 2025 19:32:22 +0000 (0:00:00.907) 0:00:45.630 ********* 2025-05-28 19:33:41.167843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.167864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.167876 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:33:41.167895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.167907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.167919 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:33:41.167930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.167942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.167959 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:33:41.167970 | orchestrator | 2025-05-28 19:33:41.167985 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-28 19:33:41.167996 | orchestrator | Wednesday 28 May 2025 19:32:23 +0000 (0:00:01.171) 0:00:46.801 ********* 2025-05-28 19:33:41.168008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.168027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.168039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.168051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.168072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.168084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.168095 | orchestrator | 2025-05-28 19:33:41.168111 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-28 19:33:41.168123 | orchestrator | Wednesday 28 May 2025 19:32:26 +0000 (0:00:02.832) 0:00:49.634 ********* 2025-05-28 19:33:41.168134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.168146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.168163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.168179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.168197 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.168209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.168220 | orchestrator | 2025-05-28 19:33:41.168231 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-28 19:33:41.168242 | orchestrator | Wednesday 28 May 2025 19:32:33 +0000 (0:00:06.779) 0:00:56.414 ********* 2025-05-28 19:33:41.168254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.168271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.168282 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:33:41.168298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.168317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.168329 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:33:41.168340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-28 19:33:41.168352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:33:41.168369 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:33:41.168383 | orchestrator | 2025-05-28 19:33:41.168402 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-28 19:33:41.168414 | orchestrator | Wednesday 28 May 2025 19:32:34 +0000 (0:00:00.987) 0:00:57.401 ********* 2025-05-28 19:33:41.168433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.168451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.168464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-28 19:33:41.168476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.168494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.168510 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:33:41.168521 | orchestrator | 2025-05-28 19:33:41.168533 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-28 19:33:41.168544 | orchestrator | Wednesday 28 May 2025 19:32:36 +0000 (0:00:02.143) 0:00:59.544 ********* 2025-05-28 19:33:41.168555 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:33:41.168566 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:33:41.168597 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:33:41.168608 | orchestrator | 2025-05-28 19:33:41.168619 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-28 19:33:41.168630 | orchestrator | Wednesday 28 May 2025 19:32:36 +0000 (0:00:00.221) 0:00:59.766 ********* 2025-05-28 19:33:41.168641 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:33:41.168652 | orchestrator | 2025-05-28 19:33:41.168663 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-28 19:33:41.168674 | orchestrator | Wednesday 28 May 2025 19:32:39 +0000 (0:00:02.687) 0:01:02.453 ********* 2025-05-28 19:33:41.168685 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:33:41.168696 | orchestrator | 2025-05-28 19:33:41.168707 | 
orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-28 19:33:41.168718 | orchestrator | Wednesday 28 May 2025 19:32:41 +0000 (0:00:02.398) 0:01:04.852 ********* 2025-05-28 19:33:41.168729 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:33:41.168740 | orchestrator | 2025-05-28 19:33:41.168758 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-28 19:33:41.168769 | orchestrator | Wednesday 28 May 2025 19:33:00 +0000 (0:00:18.937) 0:01:23.789 ********* 2025-05-28 19:33:41.168780 | orchestrator | 2025-05-28 19:33:41.168791 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-28 19:33:41.168802 | orchestrator | Wednesday 28 May 2025 19:33:00 +0000 (0:00:00.045) 0:01:23.834 ********* 2025-05-28 19:33:41.168813 | orchestrator | 2025-05-28 19:33:41.168824 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-28 19:33:41.168841 | orchestrator | Wednesday 28 May 2025 19:33:01 +0000 (0:00:00.145) 0:01:23.980 ********* 2025-05-28 19:33:41.168852 | orchestrator | 2025-05-28 19:33:41.168862 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-28 19:33:41.168873 | orchestrator | Wednesday 28 May 2025 19:33:01 +0000 (0:00:00.122) 0:01:24.102 ********* 2025-05-28 19:33:41.168884 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:33:41.168896 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:33:41.168907 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:33:41.168918 | orchestrator | 2025-05-28 19:33:41.168929 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-28 19:33:41.168940 | orchestrator | Wednesday 28 May 2025 19:33:24 +0000 (0:00:23.211) 0:01:47.314 ********* 2025-05-28 19:33:41.168951 | orchestrator | changed: [testbed-node-0] 
2025-05-28 19:33:41.168961 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:33:41.168972 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:33:41.168983 | orchestrator | 2025-05-28 19:33:41.168994 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:33:41.169006 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-28 19:33:41.169017 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 19:33:41.169028 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 19:33:41.169039 | orchestrator | 2025-05-28 19:33:41.169050 | orchestrator | 2025-05-28 19:33:41.169061 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:33:41.169072 | orchestrator | Wednesday 28 May 2025 19:33:39 +0000 (0:00:14.618) 0:02:01.932 ********* 2025-05-28 19:33:41.169083 | orchestrator | =============================================================================== 2025-05-28 19:33:41.169093 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 23.21s 2025-05-28 19:33:41.169104 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.94s 2025-05-28 19:33:41.169115 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.62s 2025-05-28 19:33:41.169126 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.78s 2025-05-28 19:33:41.169137 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.65s 2025-05-28 19:33:41.169148 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.21s 2025-05-28 19:33:41.169159 | orchestrator | magnum : Creating Magnum trustee user 
----------------------------------- 4.09s 2025-05-28 19:33:41.169170 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.05s 2025-05-28 19:33:41.169180 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.84s 2025-05-28 19:33:41.169191 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.64s 2025-05-28 19:33:41.169202 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.61s 2025-05-28 19:33:41.169213 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.37s 2025-05-28 19:33:41.169224 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.32s 2025-05-28 19:33:41.169234 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.83s 2025-05-28 19:33:41.169245 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.69s 2025-05-28 19:33:41.169260 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.40s 2025-05-28 19:33:41.169271 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.28s 2025-05-28 19:33:41.169282 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.14s 2025-05-28 19:33:41.169298 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.46s 2025-05-28 19:33:41.169309 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.17s 2025-05-28 19:33:44.197481 | orchestrator | 2025-05-28 19:33:44 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED 2025-05-28 19:33:44.198692 | orchestrator | 2025-05-28 19:33:44 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED 2025-05-28 19:33:44.200266 | orchestrator | 2025-05-28 19:33:44 | INFO  | 
Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:44.201483 | orchestrator | 2025-05-28 19:33:44 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:44.202750 | orchestrator | 2025-05-28 19:33:44 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:44.202774 | orchestrator | 2025-05-28 19:33:44 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:47.257028 | orchestrator | 2025-05-28 19:33:47 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:33:47.258679 | orchestrator | 2025-05-28 19:33:47 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:47.260878 | orchestrator | 2025-05-28 19:33:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:47.262229 | orchestrator | 2025-05-28 19:33:47 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:47.264558 | orchestrator | 2025-05-28 19:33:47 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:47.264625 | orchestrator | 2025-05-28 19:33:47 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:50.332949 | orchestrator | 2025-05-28 19:33:50 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:33:50.333056 | orchestrator | 2025-05-28 19:33:50 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:50.338871 | orchestrator | 2025-05-28 19:33:50 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:50.341419 | orchestrator | 2025-05-28 19:33:50 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:50.342714 | orchestrator | 2025-05-28 19:33:50 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:50.342806 | orchestrator | 2025-05-28 19:33:50 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:53.382723 | orchestrator | 2025-05-28 19:33:53 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:33:53.383882 | orchestrator | 2025-05-28 19:33:53 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:53.383919 | orchestrator | 2025-05-28 19:33:53 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:53.384673 | orchestrator | 2025-05-28 19:33:53 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:53.388719 | orchestrator | 2025-05-28 19:33:53 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:53.388756 | orchestrator | 2025-05-28 19:33:53 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:56.428157 | orchestrator | 2025-05-28 19:33:56 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:33:56.428266 | orchestrator | 2025-05-28 19:33:56 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:56.428635 | orchestrator | 2025-05-28 19:33:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:56.429749 | orchestrator | 2025-05-28 19:33:56 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:56.431161 | orchestrator | 2025-05-28 19:33:56 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:56.431210 | orchestrator | 2025-05-28 19:33:56 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:33:59.467900 | orchestrator | 2025-05-28 19:33:59 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:33:59.468005 | orchestrator | 2025-05-28 19:33:59 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:33:59.468415 | orchestrator | 2025-05-28 19:33:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:33:59.469040 | orchestrator | 2025-05-28 19:33:59 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:33:59.469551 | orchestrator | 2025-05-28 19:33:59 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:33:59.469632 | orchestrator | 2025-05-28 19:33:59 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:34:02.496411 | orchestrator | 2025-05-28 19:34:02 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:34:02.497738 | orchestrator | 2025-05-28 19:34:02 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:34:02.498937 | orchestrator | 2025-05-28 19:34:02 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:34:02.498977 | orchestrator | 2025-05-28 19:34:02 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:34:02.499493 | orchestrator | 2025-05-28 19:34:02 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:34:02.499531 | orchestrator | 2025-05-28 19:34:02 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:34:05.528666 | orchestrator | 2025-05-28 19:34:05 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:34:05.528879 | orchestrator | 2025-05-28 19:34:05 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:34:05.529466 | orchestrator | 2025-05-28 19:34:05 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:34:05.531242 | orchestrator | 2025-05-28 19:34:05 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:34:05.531670 | orchestrator | 2025-05-28 19:34:05 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:34:05.531735 | orchestrator | 2025-05-28 19:34:05 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:34:08.564862 | orchestrator | 2025-05-28 19:34:08 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:34:08.564982 | orchestrator | 2025-05-28 19:34:08 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:34:08.565483 | orchestrator | 2025-05-28 19:34:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:34:08.566139 | orchestrator | 2025-05-28 19:34:08 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:34:08.566829 | orchestrator | 2025-05-28 19:34:08 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:34:08.566858 | orchestrator | 2025-05-28 19:34:08 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:34:11.615483 | orchestrator | 2025-05-28 19:34:11 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:34:11.615917 | orchestrator | 2025-05-28 19:34:11 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:34:11.617857 | orchestrator | 2025-05-28 19:34:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:34:11.619088 | orchestrator | 2025-05-28 19:34:11 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:34:11.620299 | orchestrator | 2025-05-28 19:34:11 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:34:11.620323 | orchestrator | 2025-05-28 19:34:11 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:34:14.654982 | orchestrator | 2025-05-28 19:34:14 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:34:14.655086 | orchestrator | 2025-05-28 19:34:14 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state STARTED
2025-05-28 19:34:14.655538 | orchestrator | 2025-05-28 19:34:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:34:14.656374 | orchestrator | 2025-05-28 19:34:14 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:34:14.656828 | orchestrator | 2025-05-28 19:34:14 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:34:14.656971 | orchestrator | 2025-05-28 19:34:14 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:34:17.687931 | orchestrator | 2025-05-28 19:34:17 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:34:17.693388 | orchestrator | 2025-05-28 19:34:17 | INFO  | Task 72f9233a-39ef-4cfd-a5ca-f7da9cc4fd0d is in state SUCCESS
2025-05-28 19:34:17.695261 | orchestrator |
2025-05-28 19:34:17.695301 | orchestrator |
2025-05-28 19:34:17.695315 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 19:34:17.695327 | orchestrator |
2025-05-28 19:34:17.695339 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 19:34:17.695350 | orchestrator | Wednesday 28 May 2025 19:29:30 +0000 (0:00:00.496) 0:00:00.496 *********
2025-05-28 19:34:17.695362 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:34:17.695415 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:34:17.695428 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:34:17.695440 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:34:17.695450 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:34:17.695490 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:34:17.695502 | orchestrator |
2025-05-28 19:34:17.695513 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 19:34:17.695524 | orchestrator | Wednesday 28 May 2025 19:29:31 +0000 (0:00:01.259) 0:00:01.755 *********
2025-05-28 19:34:17.695536 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-05-28 19:34:17.695568 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-05-28 19:34:17.695580 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-05-28 19:34:17.695591 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-05-28 19:34:17.695602 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-05-28 19:34:17.695613 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-05-28 19:34:17.695624 | orchestrator |
2025-05-28 19:34:17.695636 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-05-28 19:34:17.695647 | orchestrator |
2025-05-28 19:34:17.695658 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-28 19:34:17.695669 | orchestrator | Wednesday 28 May 2025 19:29:32 +0000 (0:00:00.828) 0:00:02.584 *********
2025-05-28 19:34:17.695681 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:34:17.695694 | orchestrator |
2025-05-28 19:34:17.695729 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-05-28 19:34:17.695743 | orchestrator | Wednesday 28 May 2025 19:29:34 +0000 (0:00:01.574) 0:00:04.158 *********
2025-05-28 19:34:17.695755 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:34:17.695768 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:34:17.695780 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:34:17.695792 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:34:17.695805 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:34:17.695817 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:34:17.695829 | orchestrator |
2025-05-28 19:34:17.695841 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-05-28 19:34:17.695854 | orchestrator | Wednesday 28 May 2025 19:29:35 +0000 (0:00:01.383) 0:00:05.542 *********
2025-05-28 19:34:17.695866 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:34:17.695878 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:34:17.695891 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:34:17.695903 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:34:17.695916 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:34:17.695927 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:34:17.695938 | orchestrator |
2025-05-28 19:34:17.695949 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-05-28 19:34:17.695960 | orchestrator | Wednesday 28 May 2025 19:29:37 +0000 (0:00:01.395) 0:00:06.937 *********
2025-05-28 19:34:17.695972 | orchestrator | ok: [testbed-node-0] => {
2025-05-28 19:34:17.695983 | orchestrator |  "changed": false,
2025-05-28 19:34:17.695995 | orchestrator |  "msg": "All assertions passed"
2025-05-28 19:34:17.696006 | orchestrator | }
2025-05-28 19:34:17.696017 | orchestrator | ok: [testbed-node-1] => {
2025-05-28 19:34:17.696028 | orchestrator |  "changed": false,
2025-05-28 19:34:17.696040 | orchestrator |  "msg": "All assertions passed"
2025-05-28 19:34:17.696051 | orchestrator | }
2025-05-28 19:34:17.696062 | orchestrator | ok: [testbed-node-2] => {
2025-05-28 19:34:17.696073 | orchestrator |  "changed": false,
2025-05-28 19:34:17.696084 | orchestrator |  "msg": "All assertions passed"
2025-05-28 19:34:17.696095 | orchestrator | }
2025-05-28 19:34:17.696106 | orchestrator | ok: [testbed-node-3] => {
2025-05-28 19:34:17.696118 | orchestrator |  "changed": false,
2025-05-28 19:34:17.696129 | orchestrator |  "msg": "All assertions passed"
2025-05-28 19:34:17.696140 | orchestrator | }
2025-05-28 19:34:17.696151 | orchestrator | ok: [testbed-node-4] => {
2025-05-28 19:34:17.696162 | orchestrator |  "changed": false,
2025-05-28 19:34:17.696173 | orchestrator |  "msg": "All assertions passed"
2025-05-28 19:34:17.696184 | orchestrator | }
2025-05-28 19:34:17.696195 | orchestrator | ok: [testbed-node-5] => {
2025-05-28 19:34:17.696206 | orchestrator |  "changed": false,
2025-05-28 19:34:17.696217 | orchestrator |  "msg": "All assertions passed"
2025-05-28 19:34:17.696229 | orchestrator | }
2025-05-28 19:34:17.696239 | orchestrator |
2025-05-28 19:34:17.696251 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-05-28 19:34:17.696262 | orchestrator | Wednesday 28 May 2025 19:29:37 +0000 (0:00:00.826) 0:00:07.764 *********
2025-05-28 19:34:17.696273 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:34:17.696284 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:34:17.696295 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:34:17.696306 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:34:17.696317 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:34:17.696328 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:34:17.696339 | orchestrator |
2025-05-28 19:34:17.696350 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-05-28 19:34:17.696361 | orchestrator | Wednesday 28 May 2025 19:29:38 +0000 (0:00:00.925) 0:00:08.689 *********
2025-05-28 19:34:17.696372 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-05-28 19:34:17.696383 | orchestrator |
2025-05-28 19:34:17.696394 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-05-28 19:34:17.696405 | orchestrator | Wednesday 28 May 2025 19:29:42 +0000 (0:00:03.478) 0:00:12.168 *********
2025-05-28 19:34:17.696424 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-05-28 19:34:17.696444 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-05-28 19:34:17.696456 | orchestrator |
2025-05-28 19:34:17.696479 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-05-28 19:34:17.696491 | orchestrator | Wednesday 28 May 2025 19:29:48 +0000 (0:00:06.528) 0:00:18.697 *********
2025-05-28 19:34:17.696502 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-28 19:34:17.696513 | orchestrator |
2025-05-28 19:34:17.696524 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-05-28 19:34:17.696535 | orchestrator | Wednesday 28 May 2025 19:29:52 +0000 (0:00:03.264) 0:00:21.961 *********
2025-05-28 19:34:17.696598 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-28 19:34:17.696612 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-05-28 19:34:17.696623 | orchestrator |
2025-05-28 19:34:17.696634 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-05-28 19:34:17.696645 | orchestrator | Wednesday 28 May 2025 19:29:55 +0000 (0:00:03.890) 0:00:25.851 *********
2025-05-28 19:34:17.696656 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-28 19:34:17.696667 | orchestrator |
2025-05-28 19:34:17.696678 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-05-28 19:34:17.696688 | orchestrator | Wednesday 28 May 2025 19:29:59 +0000 (0:00:03.071) 0:00:28.923 *********
2025-05-28 19:34:17.696699 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-05-28 19:34:17.696710 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-05-28 19:34:17.696721 | orchestrator |
2025-05-28 19:34:17.696732 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-28 19:34:17.696742 | orchestrator | Wednesday 28 May 2025 19:30:07 +0000 (0:00:08.455) 0:00:37.378 *********
2025-05-28 19:34:17.696753 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:34:17.696764 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:34:17.696775 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:34:17.696786 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:34:17.696797 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:34:17.696808 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:34:17.696819 | orchestrator |
2025-05-28 19:34:17.696830 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-05-28 19:34:17.696841 | orchestrator | Wednesday 28 May 2025 19:30:08 +0000 (0:00:00.741) 0:00:38.120 *********
2025-05-28 19:34:17.696852 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:34:17.696863 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:34:17.696874 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:34:17.696884 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:34:17.696895 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:34:17.696906 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:34:17.696917 | orchestrator |
2025-05-28 19:34:17.696928 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-05-28 19:34:17.696939 | orchestrator | Wednesday 28 May 2025 19:30:11 +0000 (0:00:03.666) 0:00:41.786 *********
2025-05-28 19:34:17.696950 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:34:17.696961 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:34:17.696972 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:34:17.696983 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:34:17.696993 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:34:17.697004 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:34:17.697015 | orchestrator |
2025-05-28 19:34:17.697026 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-28 19:34:17.697037 | orchestrator | Wednesday 28 May 2025 19:30:13 +0000 (0:00:01.548) 0:00:43.335 *********
2025-05-28 19:34:17.697048 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:34:17.697066 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:34:17.697077 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:34:17.697088 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:34:17.697099 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:34:17.697110 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:34:17.697121 | orchestrator |
2025-05-28 19:34:17.697132 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-05-28 19:34:17.697143 | orchestrator | Wednesday 28 May 2025 19:30:17 +0000 (0:00:03.980) 0:00:47.315 *********
2025-05-28 19:34:17.697157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-28 19:34:17.697187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-28 19:34:17.697245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.697281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.697294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:34:17.697317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:34:17.697348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.697359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-28 19:34:17.697398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:34:17.697410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-28 19:34:17.697440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-28 19:34:17.697494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-28 19:34:17.697539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.697606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.697617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name':
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.697629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.697652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.697664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.697676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.697688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.697706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.697717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.697729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.698632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.698661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.698685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 
19:34:17.698697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.698709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.698721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.698746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.698760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.698780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.698824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.698836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.698848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.698883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.698897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-05-28 19:34:17.698931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.698944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.698955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.698973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.699027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.699135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.699147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.699184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-28 19:34:17.699286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.699300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 
19:34:17.699419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.699432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.699457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.699483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.699508 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.699528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.699540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.699639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.699652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.699664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.699704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2025-05-28 19:34:17.699730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.699742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.699769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.699846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.699865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.699903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.699960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.699982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.699995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.700006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.700019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 
'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.700059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.700078 | orchestrator | 2025-05-28 19:34:17.700091 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-28 19:34:17.700130 | orchestrator | Wednesday 28 May 2025 19:30:22 +0000 (0:00:05.331) 0:00:52.646 ********* 2025-05-28 19:34:17.700236 | orchestrator | [WARNING]: Skipped 2025-05-28 19:34:17.700255 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-28 19:34:17.700267 | orchestrator | due to this access issue: 2025-05-28 19:34:17.700285 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-28 19:34:17.700380 | orchestrator | a directory 2025-05-28 19:34:17.700394 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 19:34:17.700406 | orchestrator | 2025-05-28 19:34:17.700417 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-28 19:34:17.700428 | orchestrator | Wednesday 28 May 2025 19:30:23 +0000 (0:00:00.743) 0:00:53.390 ********* 2025-05-28 19:34:17.700440 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:34:17.700489 | orchestrator | 2025-05-28 19:34:17.700500 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-28 19:34:17.700511 | orchestrator | Wednesday 28 May 2025 19:30:25 +0000 (0:00:02.377) 0:00:55.767 ********* 2025-05-28 19:34:17.700523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.700536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.700575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.700604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.700624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.700636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.700648 | orchestrator | 2025-05-28 19:34:17.700659 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-28 19:34:17.700670 | orchestrator | Wednesday 28 May 2025 19:30:30 +0000 (0:00:04.966) 0:01:00.734 ********* 2025-05-28 19:34:17.700682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.700700 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.700712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.700723 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.700746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.700758 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.700772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.700793 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.700812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.700855 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.700873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.700903 | orchestrator | skipping: 
[testbed-node-3] 2025-05-28 19:34:17.700923 | orchestrator | 2025-05-28 19:34:17.700941 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-28 19:34:17.700953 | orchestrator | Wednesday 28 May 2025 19:30:34 +0000 (0:00:04.122) 0:01:04.857 ********* 2025-05-28 19:34:17.700964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.700975 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.701000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.701013 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.701025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.701036 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.701047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 
19:34:17.701065 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.701076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.701088 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.701117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.701130 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.701141 | orchestrator | 2025-05-28 
19:34:17.701157 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-28 19:34:17.701169 | orchestrator | Wednesday 28 May 2025 19:30:39 +0000 (0:00:04.561) 0:01:09.418 ********* 2025-05-28 19:34:17.701180 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.701192 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.701287 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.701298 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.701309 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.701320 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.701331 | orchestrator | 2025-05-28 19:34:17.701342 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-28 19:34:17.701353 | orchestrator | Wednesday 28 May 2025 19:30:43 +0000 (0:00:03.561) 0:01:12.979 ********* 2025-05-28 19:34:17.701364 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.701375 | orchestrator | 2025-05-28 19:34:17.701386 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-28 19:34:17.701397 | orchestrator | Wednesday 28 May 2025 19:30:43 +0000 (0:00:00.113) 0:01:13.092 ********* 2025-05-28 19:34:17.701408 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.701419 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.701430 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.701441 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.701452 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.701463 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.701474 | orchestrator | 2025-05-28 19:34:17.701485 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-28 19:34:17.701503 | orchestrator | Wednesday 28 May 2025 19:30:43 +0000 (0:00:00.801) 
0:01:13.894 ********* 2025-05-28 19:34:17.701515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.701527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.701538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.703240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.703265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.703277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.703315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.703351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.703363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 
19:34:17.703375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.703388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.703427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703457 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.703469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.703514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.703671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.703683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.703719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.703743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.703790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': 
'30'}}})  2025-05-28 19:34:17.703825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.703842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.703854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.703866 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.703900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 
'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.703927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.703939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.703951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.703973 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.703984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.703996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.704034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.704046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2025-05-28 19:34:17.704070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.704082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704099 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.704118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.704130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.704188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.704209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.704220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.704240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.704265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.704297 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.704308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2025-05-28 19:34:17.704347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.704369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.704380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.704391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.704426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.704458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.704483 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.704493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  
2025-05-28 19:34:17.704524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.704534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.704561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.704609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.704620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.704631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.704641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.704687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.704699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.704710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.704736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.704747 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.704757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.705175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.705206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.705216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705234 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.705244 | orchestrator | 2025-05-28 19:34:17.705254 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-28 19:34:17.705264 | orchestrator | Wednesday 28 May 2025 19:30:48 +0000 (0:00:04.702) 0:01:18.596 ********* 2025-05-28 19:34:17.705275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.705296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.705392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.705418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.705434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705445 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.705462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.705513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.705539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.705599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.705632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.705670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.705711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705737 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.705747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.705776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.705789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-05-28 19:34:17.705818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.705830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.705842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.705853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.705955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.705982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.705991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.706000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.706046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.706073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.706116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.706167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.706209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.706235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.706247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-05-28 19:34:17.706260 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.706274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.706291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.706329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.706338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706347 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.706355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.706364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.706393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.706447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.706464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.706473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.706538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.706567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.706607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.706615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.706632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.706661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.706687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.706696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706708 | orchestrator | 2025-05-28 19:34:17.706717 | orchestrator | 
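The loop output above shows each neutron service dict being evaluated per host: items report `changed` only when the service is enabled and the host belongs to the service's group, and `skipping` otherwise (note that `enabled` is occasionally a string like `'no'` rather than a boolean in the raw data). A minimal sketch of that filtering logic, with hypothetical abbreviated data modeled on the log items (field names match the log; this is an approximation of kolla-ansible's behavior, not its actual implementation):

```python
# Hedged sketch of the per-host service filter implied by the log:
# a task item runs ("changed") only when the service is enabled AND
# the host is in the service's group ("host_in_groups").
# Data below is abbreviated from the log items; values are illustrative.
neutron_services = {
    "neutron-server": {"enabled": True, "host_in_groups": True},
    "neutron-dhcp-agent": {"enabled": False, "host_in_groups": True},
    "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": True},
    "ironic-neutron-agent": {"enabled": False, "host_in_groups": False},
    "neutron-tls-proxy": {"enabled": "no", "host_in_groups": False},
}

def truthy(value):
    """Normalize mixed bool/str flags seen in the log ('no', False, True)."""
    if isinstance(value, str):
        return value.lower() in ("yes", "true", "1")
    return bool(value)

def services_to_deploy(services):
    """Return names of services this host would actually deploy,
    mirroring the 'changed' vs 'skipping' split in the loop output."""
    return [
        name
        for name, svc in services.items()
        if truthy(svc["enabled"]) and truthy(svc["host_in_groups"])
    ]

print(services_to_deploy(neutron_services))
# → ['neutron-server', 'neutron-ovn-metadata-agent']
```

This matches the pattern in the log, where e.g. `neutron-ovn-metadata-agent` (enabled, host in group) reports `changed` on testbed-node-4/5 while disabled or out-of-group agents report `skipping`.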
TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-28 19:34:17.706725 | orchestrator | Wednesday 28 May 2025 19:30:52 +0000 (0:00:03.824) 0:01:22.421 ********* 2025-05-28 19:34:17.706748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.706757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706766 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.706805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.706853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.706897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.706930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.706953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706974 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.706982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.707001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-28 19:34:17.707042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.707050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.707096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.707147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.707169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 
19:34:17.707194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.707277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.707287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.707310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.707357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.707409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.707432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.707467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.707476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.707499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.707516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.707569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.707578 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.707615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.707653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.707709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.707726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 
19:34:17.707734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.707764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.707773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 
19:34:17.707790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.707814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.707867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.707876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.707915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.707933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.707941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.707950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.708029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.708046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708055 | orchestrator | 2025-05-28 19:34:17.708063 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-28 19:34:17.708071 | orchestrator | Wednesday 28 May 2025 19:31:00 +0000 (0:00:07.478) 0:01:29.899 ********* 2025-05-28 19:34:17.708080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.708088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.708136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708144 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.708153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.708162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.708193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.708211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.708219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.708228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.708275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.708292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.708318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.708327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.708344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708352 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.708360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.708391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.708415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.708423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.708441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.708457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708469 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.708478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.708487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708495 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.708533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.708592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.708602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.708610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.708624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.708640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:34:17.708650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.708658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.708667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-28 19:34:17.708676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:34:17.708689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.708698 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:34:17.708717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-28 19:34:17.708726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.708735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.708877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.708895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-28 19:34:17.708913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.708922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.708931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.708939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-28 19:34:17.708948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.708961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.708976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.708985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:34:17.708994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:34:17.709023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-28 19:34:17.709039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.709048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-28 19:34:17.709079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.709088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:34:17.709096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.709112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:34:17.709138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:34:17.709152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.709166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-28 19:34:17.709182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:34:17.709194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-28 19:34:17.709215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-28 19:34:17.709247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.709621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.709636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-28 19:34:17.709660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:34:17.709674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:34:17.709682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-28 19:34:17.709697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u
openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.709706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.709719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.709726 | orchestrator | 2025-05-28 19:34:17.709733 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 
2025-05-28 19:34:17.709740 | orchestrator | Wednesday 28 May 2025 19:31:03 +0000 (0:00:03.783) 0:01:33.683 ********* 2025-05-28 19:34:17.709747 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.709754 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.709761 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.709768 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:34:17.709775 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:34:17.709782 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:34:17.709789 | orchestrator | 2025-05-28 19:34:17.709796 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-28 19:34:17.709825 | orchestrator | Wednesday 28 May 2025 19:31:08 +0000 (0:00:04.491) 0:01:38.174 ********* 2025-05-28 19:34:17.709834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.709849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.709857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.709869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.709876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.709883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.709890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.709904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.709911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.709923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.709930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.709937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.709944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.709951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.709965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.709977 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.709984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.709991 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.709998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.710006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710053 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.710067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-28 19:34:17.710107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.710115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.710129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.710162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.710273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710282 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.710290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.710298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.710370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.710425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710441 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.710450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.710487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.710495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710503 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.710511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.710519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.710578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.710627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.710641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.710674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.710681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.710696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.710745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.710767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.710778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.710809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.710837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.711176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.711190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.711197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.711205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.711218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.711230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.711267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.711275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.711283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.711290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.711301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.711308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.711356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 
19:34:17.711367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.711374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.711381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.711449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.711459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.711466 | orchestrator | 2025-05-28 19:34:17.711473 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-28 19:34:17.711483 | orchestrator | Wednesday 28 May 2025 19:31:12 +0000 (0:00:04.160) 0:01:42.335 ********* 2025-05-28 
19:34:17.711490 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.711535 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.711561 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.711569 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.711576 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.711582 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.711589 | orchestrator | 2025-05-28 19:34:17.711596 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-28 19:34:17.711603 | orchestrator | Wednesday 28 May 2025 19:31:14 +0000 (0:00:01.909) 0:01:44.244 ********* 2025-05-28 19:34:17.711610 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.711617 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.711623 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.711630 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.711637 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.711643 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.711650 | orchestrator | 2025-05-28 19:34:17.711657 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-28 19:34:17.711664 | orchestrator | Wednesday 28 May 2025 19:31:16 +0000 (0:00:01.867) 0:01:46.111 ********* 2025-05-28 19:34:17.711670 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.711677 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.711684 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.711690 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.711702 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.711709 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.711716 | orchestrator | 2025-05-28 19:34:17.711723 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] 
*********************************** 2025-05-28 19:34:17.711730 | orchestrator | Wednesday 28 May 2025 19:31:18 +0000 (0:00:02.124) 0:01:48.236 ********* 2025-05-28 19:34:17.711736 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.711743 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.711750 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.711756 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.711763 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.711770 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.711777 | orchestrator | 2025-05-28 19:34:17.711783 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-28 19:34:17.711790 | orchestrator | Wednesday 28 May 2025 19:31:21 +0000 (0:00:03.074) 0:01:51.311 ********* 2025-05-28 19:34:17.711797 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.711803 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.711810 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.711817 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.711823 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.711830 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.711837 | orchestrator | 2025-05-28 19:34:17.711844 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-28 19:34:17.711850 | orchestrator | Wednesday 28 May 2025 19:31:23 +0000 (0:00:01.916) 0:01:53.227 ********* 2025-05-28 19:34:17.711857 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.711864 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.711871 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.711877 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.711884 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.711891 | orchestrator | skipping: 
[testbed-node-4] 2025-05-28 19:34:17.711897 | orchestrator | 2025-05-28 19:34:17.711904 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-28 19:34:17.711911 | orchestrator | Wednesday 28 May 2025 19:31:25 +0000 (0:00:02.207) 0:01:55.435 ********* 2025-05-28 19:34:17.711918 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 19:34:17.711924 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.711931 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 19:34:17.711938 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.711945 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 19:34:17.711952 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.711958 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 19:34:17.711965 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.711972 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 19:34:17.711978 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.711985 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-28 19:34:17.711992 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.711999 | orchestrator | 2025-05-28 19:34:17.712005 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-28 19:34:17.712012 | orchestrator | Wednesday 28 May 2025 19:31:27 +0000 (0:00:02.222) 0:01:57.658 ********* 2025-05-28 19:34:17.712061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.712385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.712408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.712490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-05-28 19:34:17.712505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.712571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.712583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.712590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.712612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.712619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.712661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.712670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.712749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.712760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.712767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 
'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.712793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.712829 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.712846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.712861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.712872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.713200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.713215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.713223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.713230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.713238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.713305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713332 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.713340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.713352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}}})  2025-05-28 19:34:17.713359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713367 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.713418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.713429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.713437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713444 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.713451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.713463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.713539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.713612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.713623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713678 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.713688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-28 19:34:17.713708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.713715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.713771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.713778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713793 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.713800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.713812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2025-05-28 19:34:17.713884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.713902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.713908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.713964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713970 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.713981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.713988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.713995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.714077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.714090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714096 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.714108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.714115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.714191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.714216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.714223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.714280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.714300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.714307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.714321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.714370 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714379 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.714391 | orchestrator | 2025-05-28 19:34:17.714397 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-28 19:34:17.714404 | orchestrator | Wednesday 28 May 2025 19:31:30 +0000 (0:00:02.411) 0:02:00.069 ********* 2025-05-28 19:34:17.714411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2025-05-28 19:34:17.714418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.714503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714510 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.714517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.714523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.714589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.714610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.714617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.714640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.714689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714702 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.714709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.714716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.714789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.714803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.714809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.714823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.714892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.714899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.714913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.714919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714930 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.714978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.714988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.714995 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.715068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.715105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.715163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.715187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.715194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715204 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.715260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.715270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-28 19:34:17.715290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.715301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 
19:34:17.715357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.715377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.715394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.715470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.715477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715488 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.715494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.715540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.715597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.715648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.715739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 
'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.715790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.715842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 
19:34:17.715860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.715866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.715899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.715925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.715933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.715958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.715967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715973 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.715980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.715992 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.715998 | orchestrator | 2025-05-28 19:34:17.716012 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-28 19:34:17.716019 | orchestrator | Wednesday 28 May 2025 19:31:34 +0000 (0:00:04.127) 0:02:04.197 ********* 2025-05-28 19:34:17.716026 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716033 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716040 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.716047 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.716053 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.716060 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.716067 | orchestrator | 2025-05-28 19:34:17.716074 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-28 19:34:17.716081 | orchestrator | Wednesday 28 May 2025 19:31:37 +0000 (0:00:03.254) 0:02:07.451 ********* 2025-05-28 19:34:17.716087 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.716094 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716101 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716108 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:34:17.716115 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:34:17.716121 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:34:17.716128 | orchestrator | 2025-05-28 19:34:17.716135 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-28 19:34:17.716142 | orchestrator | Wednesday 28 May 2025 19:31:43 +0000 (0:00:06.119) 0:02:13.571 ********* 2025-05-28 19:34:17.716148 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716155 | orchestrator | skipping: 
[testbed-node-4] 2025-05-28 19:34:17.716162 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716169 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.716175 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.716182 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.716189 | orchestrator | 2025-05-28 19:34:17.716196 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-28 19:34:17.716202 | orchestrator | Wednesday 28 May 2025 19:31:47 +0000 (0:00:03.713) 0:02:17.284 ********* 2025-05-28 19:34:17.716209 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716216 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.716223 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716230 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.716236 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.716243 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.716250 | orchestrator | 2025-05-28 19:34:17.716257 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-28 19:34:17.716263 | orchestrator | Wednesday 28 May 2025 19:31:51 +0000 (0:00:04.555) 0:02:21.840 ********* 2025-05-28 19:34:17.716270 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716277 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.716283 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716290 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.716297 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.716304 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.716310 | orchestrator | 2025-05-28 19:34:17.716317 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-28 19:34:17.716327 | orchestrator | Wednesday 28 May 2025 19:31:54 +0000 (0:00:02.288) 
0:02:24.128 ********* 2025-05-28 19:34:17.716355 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716363 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.716370 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716377 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.716384 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.716391 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.716398 | orchestrator | 2025-05-28 19:34:17.716405 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-28 19:34:17.716412 | orchestrator | Wednesday 28 May 2025 19:31:56 +0000 (0:00:02.094) 0:02:26.223 ********* 2025-05-28 19:34:17.716419 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716426 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.716433 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716440 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.716447 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.716454 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.716461 | orchestrator | 2025-05-28 19:34:17.716468 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-28 19:34:17.716475 | orchestrator | Wednesday 28 May 2025 19:31:58 +0000 (0:00:02.120) 0:02:28.343 ********* 2025-05-28 19:34:17.716482 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716489 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716496 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.716503 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.716510 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.716517 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.716524 | orchestrator | 2025-05-28 19:34:17.716531 | orchestrator | TASK [neutron : Copy 
neutron-l3-agent-wrapper script] ************************** 2025-05-28 19:34:17.716538 | orchestrator | Wednesday 28 May 2025 19:32:04 +0000 (0:00:05.744) 0:02:34.088 ********* 2025-05-28 19:34:17.716556 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716563 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.716570 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.716577 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716584 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.716591 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.716598 | orchestrator | 2025-05-28 19:34:17.716605 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-28 19:34:17.716612 | orchestrator | Wednesday 28 May 2025 19:32:06 +0000 (0:00:02.048) 0:02:36.136 ********* 2025-05-28 19:34:17.716619 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716626 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716633 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.716640 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.716647 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.716654 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.716661 | orchestrator | 2025-05-28 19:34:17.716668 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-28 19:34:17.716675 | orchestrator | Wednesday 28 May 2025 19:32:08 +0000 (0:00:02.265) 0:02:38.402 ********* 2025-05-28 19:34:17.716682 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 19:34:17.716689 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.716697 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 19:34:17.716704 | orchestrator | 
skipping: [testbed-node-1] 2025-05-28 19:34:17.716711 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 19:34:17.716718 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.716726 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 19:34:17.716733 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.716746 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 19:34:17.716752 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.716759 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-28 19:34:17.716765 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.716771 | orchestrator | 2025-05-28 19:34:17.716778 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-28 19:34:17.716784 | orchestrator | Wednesday 28 May 2025 19:32:10 +0000 (0:00:02.221) 0:02:40.623 ********* 2025-05-28 19:34:17.716791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.716818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.716826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.716833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.716840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.716851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 
19:34:17.716876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.716884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.716890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.716897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.716908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.716915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': 
False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.716940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.716948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.716955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.716967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.716974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.716983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.717014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.717021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717077 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.717084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.717147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 
19:34:17.717204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.717212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717244 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.717273 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.717324 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.717335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.717404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717423 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.717429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.717436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-28 19:34:17.717484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.717491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 
19:34:17.717505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.717613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.717656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717675 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.717682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.717688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.717738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-05-28 19:34:17.717745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.717782 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.717846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 
'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.717875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.717881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717931 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.717937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.717968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.717980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.717986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.717992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.717998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.718048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718057 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.718063 | orchestrator | 2025-05-28 19:34:17.718069 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-28 19:34:17.718074 | orchestrator | Wednesday 28 May 2025 19:32:12 +0000 (0:00:02.128) 0:02:42.751 ********* 2025-05-28 19:34:17.718080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.718086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.718133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.718187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.718226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718241 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.718288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.718327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.718365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.718377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.718399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.718410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-28 19:34:17.718422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.718454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 
19:34:17.718471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.718488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.718520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.718567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.718582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 
19:34:17.718588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.718606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.718611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-28 19:34:17.718626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-28 19:34:17.718658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 
'timeout': '30'}}})  2025-05-28 19:34:17.718664 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.718669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.718702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.718713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.718748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.718753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.718759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.718792 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.718798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 19:34:17.718815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.718827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 
19:34:17.718836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.718849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.718857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718866 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-28 
19:34:17.718875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:34:17.718886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:34:17.718892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-28 19:34:17.718909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-28 19:34:17.718918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-28 19:34:17.718923 | orchestrator | 2025-05-28 19:34:17.718929 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-28 19:34:17.718935 | orchestrator | Wednesday 28 May 2025 19:32:16 +0000 (0:00:04.141) 0:02:46.892 ********* 2025-05-28 19:34:17.718940 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:34:17.718946 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:34:17.718953 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:34:17.718963 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:34:17.718970 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:34:17.718976 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:34:17.718981 | orchestrator | 2025-05-28 19:34:17.718987 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-28 19:34:17.718992 | orchestrator | Wednesday 28 May 2025 19:32:17 +0000 (0:00:00.551) 0:02:47.443 ********* 
2025-05-28 19:34:17.718998 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:34:17.719003 | orchestrator | 2025-05-28 19:34:17.719009 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-28 19:34:17.719014 | orchestrator | Wednesday 28 May 2025 19:32:19 +0000 (0:00:02.377) 0:02:49.821 ********* 2025-05-28 19:34:17.719020 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:34:17.719025 | orchestrator | 2025-05-28 19:34:17.719031 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-28 19:34:17.719036 | orchestrator | Wednesday 28 May 2025 19:32:22 +0000 (0:00:02.314) 0:02:52.135 ********* 2025-05-28 19:34:17.719042 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:34:17.719048 | orchestrator | 2025-05-28 19:34:17.719053 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-28 19:34:17.719059 | orchestrator | Wednesday 28 May 2025 19:33:05 +0000 (0:00:42.906) 0:03:35.042 ********* 2025-05-28 19:34:17.719064 | orchestrator | 2025-05-28 19:34:17.719070 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-28 19:34:17.719075 | orchestrator | Wednesday 28 May 2025 19:33:05 +0000 (0:00:00.113) 0:03:35.156 ********* 2025-05-28 19:34:17.719081 | orchestrator | 2025-05-28 19:34:17.719086 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-28 19:34:17.719091 | orchestrator | Wednesday 28 May 2025 19:33:05 +0000 (0:00:00.444) 0:03:35.600 ********* 2025-05-28 19:34:17.719097 | orchestrator | 2025-05-28 19:34:17.719102 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-28 19:34:17.719108 | orchestrator | Wednesday 28 May 2025 19:33:05 +0000 (0:00:00.106) 0:03:35.706 ********* 2025-05-28 19:34:17.719113 | orchestrator | 2025-05-28 19:34:17.719119 | 
orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-28 19:34:17.719124 | orchestrator | Wednesday 28 May 2025 19:33:05 +0000 (0:00:00.135) 0:03:35.841 ********* 2025-05-28 19:34:17.719130 | orchestrator | 2025-05-28 19:34:17.719139 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-28 19:34:17.719144 | orchestrator | Wednesday 28 May 2025 19:33:06 +0000 (0:00:00.102) 0:03:35.944 ********* 2025-05-28 19:34:17.719150 | orchestrator | 2025-05-28 19:34:17.719155 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-05-28 19:34:17.719161 | orchestrator | Wednesday 28 May 2025 19:33:06 +0000 (0:00:00.482) 0:03:36.427 ********* 2025-05-28 19:34:17.719166 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:34:17.719172 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:34:17.719177 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:34:17.719183 | orchestrator | 2025-05-28 19:34:17.719188 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-28 19:34:17.719194 | orchestrator | Wednesday 28 May 2025 19:33:33 +0000 (0:00:27.272) 0:04:03.700 ********* 2025-05-28 19:34:17.719199 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:34:17.719205 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:34:17.719210 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:34:17.719216 | orchestrator | 2025-05-28 19:34:17.719223 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:34:17.719232 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-28 19:34:17.719239 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-28 19:34:17.719244 | orchestrator | testbed-node-2 : ok=17  
changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-28 19:34:17.719250 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-28 19:34:17.719256 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-28 19:34:17.719262 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-28 19:34:17.719267 | orchestrator | 2025-05-28 19:34:17.719273 | orchestrator | 2025-05-28 19:34:17.719279 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:34:17.719284 | orchestrator | Wednesday 28 May 2025 19:34:17 +0000 (0:00:43.286) 0:04:46.986 ********* 2025-05-28 19:34:17.719290 | orchestrator | =============================================================================== 2025-05-28 19:34:17.719295 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 43.29s 2025-05-28 19:34:17.719301 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.91s 2025-05-28 19:34:17.719306 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.27s 2025-05-28 19:34:17.719312 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.46s 2025-05-28 19:34:17.719317 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.48s 2025-05-28 19:34:17.719323 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.53s 2025-05-28 19:34:17.719328 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.12s 2025-05-28 19:34:17.719334 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 5.74s 2025-05-28 19:34:17.719339 | orchestrator | neutron : Ensuring config directories 
exist ----------------------------- 5.33s 2025-05-28 19:34:17.719345 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.97s 2025-05-28 19:34:17.719350 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.70s 2025-05-28 19:34:17.719356 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.56s 2025-05-28 19:34:17.719365 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.56s 2025-05-28 19:34:17.719370 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.49s 2025-05-28 19:34:17.719376 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.16s 2025-05-28 19:34:17.719381 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.14s 2025-05-28 19:34:17.719387 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.13s 2025-05-28 19:34:17.719392 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.12s 2025-05-28 19:34:17.719398 | orchestrator | Setting sysctl values --------------------------------------------------- 3.98s 2025-05-28 19:34:17.719403 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.89s 2025-05-28 19:34:17.719409 | orchestrator | 2025-05-28 19:34:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:34:17.719414 | orchestrator | 2025-05-28 19:34:17 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED 2025-05-28 19:34:17.719420 | orchestrator | 2025-05-28 19:34:17 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED 2025-05-28 19:34:17.719425 | orchestrator | 2025-05-28 19:34:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:34:20.738259 | orchestrator | 2025-05-28 19:34:20 | 
INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:34:20.738687 | orchestrator | 2025-05-28 19:34:20 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED
2025-05-28 19:34:20.741129 | orchestrator | 2025-05-28 19:34:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:34:20.741811 | orchestrator | 2025-05-28 19:34:20 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state STARTED
2025-05-28 19:34:20.742333 | orchestrator | 2025-05-28 19:34:20 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED
2025-05-28 19:34:20.742464 | orchestrator | 2025-05-28 19:34:20 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:36:43.683301 | orchestrator | 2025-05-28 19:36:43 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED
2025-05-28 19:36:43.685570 | orchestrator | 2025-05-28 19:36:43 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED
2025-05-28 19:36:43.687209 | orchestrator | 2025-05-28 19:36:43 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED
2025-05-28 19:36:43.689095 | orchestrator | 2025-05-28 19:36:43 | INFO  | Task
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:36:43.693297 | orchestrator | 2025-05-28 19:36:43 | INFO  | Task 0f08a7d3-a8a9-4fe4-b46f-17d1a33c3bf6 is in state SUCCESS 2025-05-28 19:36:43.694923 | orchestrator | 2025-05-28 19:36:43.694957 | orchestrator | 2025-05-28 19:36:43.694969 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:36:43.694981 | orchestrator | 2025-05-28 19:36:43.694993 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:36:43.695005 | orchestrator | Wednesday 28 May 2025 19:32:24 +0000 (0:00:00.396) 0:00:00.396 ********* 2025-05-28 19:36:43.695032 | orchestrator | ok: [testbed-manager] 2025-05-28 19:36:43.695068 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:36:43.695079 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:36:43.695090 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:36:43.695101 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:36:43.695112 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:36:43.695145 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:36:43.695190 | orchestrator | 2025-05-28 19:36:43.695202 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:36:43.695213 | orchestrator | Wednesday 28 May 2025 19:32:26 +0000 (0:00:01.626) 0:00:02.023 ********* 2025-05-28 19:36:43.695225 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-28 19:36:43.695237 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-28 19:36:43.695248 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-28 19:36:43.695259 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-28 19:36:43.695270 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-28 19:36:43.695281 | orchestrator | ok: [testbed-node-4] => 
(item=enable_prometheus_True) 2025-05-28 19:36:43.695293 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-28 19:36:43.695304 | orchestrator | 2025-05-28 19:36:43.695315 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-28 19:36:43.695326 | orchestrator | 2025-05-28 19:36:43.695338 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-28 19:36:43.695372 | orchestrator | Wednesday 28 May 2025 19:32:27 +0000 (0:00:01.367) 0:00:03.390 ********* 2025-05-28 19:36:43.695384 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:36:43.695396 | orchestrator | 2025-05-28 19:36:43.695407 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-28 19:36:43.695418 | orchestrator | Wednesday 28 May 2025 19:32:29 +0000 (0:00:01.451) 0:00:04.842 ********* 2025-05-28 19:36:43.695433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.695480 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.695543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.695709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.695727 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-28 19:36:43.695751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.695764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.695776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.695796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.695814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.695827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.695846 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.695858 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.695870 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.695881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.695893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.695910 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.695927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.695948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.695960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.695971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.695984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.695997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.696016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.696035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.696055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.696067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.696079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.696091 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.696120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.696140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.696214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.696230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.696242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.696254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.696265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.696333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.696360 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2025-05-28 19:36:43.696373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.696440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.696485 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.696497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.696508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.696547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 
'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.696561 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-28 19:36:43.696573 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.696585 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.696597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.696609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.697675 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.697705 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.697717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.697729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.697741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.697753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.697766 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.697802 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.697815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.697827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.697839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.697851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.697898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.697912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.697925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.697937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.698136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.698187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.698206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.698218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.698245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.698258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.698272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.698319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 
19:36:43.698333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.698346 | orchestrator |
2025-05-28 19:36:43.698360 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-05-28 19:36:43.698373 | orchestrator | Wednesday 28 May 2025 19:32:33 +0000 (0:00:03.905) 0:00:08.747 *********
2025-05-28 19:36:43.698387 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:36:43.698400 | orchestrator |
2025-05-28 19:36:43.698413 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-05-28 19:36:43.698427 | orchestrator | Wednesday 28 May 2025 19:32:35 +0000 (0:00:01.952) 0:00:10.699 *********
2025-05-28 19:36:43.698441 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False,
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-28 19:36:43.698515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.698536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.698550 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.698572 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.698591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.698631 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.698642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.698652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.698690 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.698714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.698742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.698763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698774 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-28 19:36:43.698791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.698835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.698846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698856 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.698867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.698883 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.698961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.698978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.698994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.699004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.699014 | orchestrator |
2025-05-28 19:36:43.699024 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-05-28 19:36:43.699034 | orchestrator | Wednesday 28 May 2025 19:32:40 +0000 (0:00:05.571) 0:00:16.271 *********
2025-05-28 19:36:43.699045 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-28 19:36:43.699062 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699073 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699089 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 
19:36:43.699105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699153 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699241 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.699252 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:36:43.699262 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.699272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699333 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.699344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699380 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.699390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699421 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.699441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.699485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.699495 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:36:43.699506 | orchestrator |
2025-05-28 19:36:43.699516 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-05-28 19:36:43.699526 | orchestrator | Wednesday 28 May 2025 19:32:42 +0000 (0:00:02.122) 0:00:18.394 *********
2025-05-28 19:36:43.699536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-28 19:36:43.699551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699634 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.699645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699665 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699695 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699712 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.699723 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699734 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.699744 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.699754 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:36:43.699763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.699831 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.699841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2025-05-28 19:36:43.699851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.699871 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.699881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.699892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.700437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.700484 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.700501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-28 19:36:43.700511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.700522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.700532 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.700542 | orchestrator | 2025-05-28 19:36:43.700552 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-28 19:36:43.700562 | orchestrator | Wednesday 28 May 2025 19:32:45 +0000 (0:00:03.083) 0:00:21.477 ********* 2025-05-28 19:36:43.700572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.700583 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.700600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.700621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.700631 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-28 19:36:43.700642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.700652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.700662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.700688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.700699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.700710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.700720 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.700731 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.700741 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.700751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.700767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.700787 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.700798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.700808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.700818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.700829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.700839 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.700849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.700864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.700885 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.700896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.700907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 
19:36:43.700917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.700928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.700948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.700965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.700980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.700991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.701001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.701012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-28 19:36:43.701029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701059 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-28 19:36:43.701083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-28 19:36:43.701100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701145 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-28 19:36:43.701156 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-28 19:36:43.701166 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-28 19:36:43.701214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-28 19:36:43.701225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701235 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-28 19:36:43.701313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-28 19:36:43.701323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701349 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701360 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701379 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-28 19:36:43.701417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-28 19:36:43.701427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-28 19:36:43.701628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-28 19:36:43.701638 | orchestrator |
2025-05-28 19:36:43.701648 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-05-28 19:36:43.701658 | orchestrator | Wednesday 28 May 2025 19:32:53 +0000 (0:00:07.266) 0:00:28.744 *********
2025-05-28 19:36:43.701675 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-28 19:36:43.701685 | orchestrator |
2025-05-28 19:36:43.701695 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-05-28 19:36:43.701705 | orchestrator | Wednesday 28 May 2025 19:32:53 +0000 (0:00:00.557) 0:00:29.301 *********
2025-05-28 19:36:43.701715 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1117211, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6336992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701726 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1117211, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6336992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701736 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1117211, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6336992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701747 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1117211, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6336992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701765 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1117221, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701773 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1117211, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6336992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701782 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1117221, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701795 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1117221, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701803 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1117221, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701812 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1117221, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701820 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1117211, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6336992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701837 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1117211, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6336992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701846 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1117212, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701859 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1117212, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701868 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1117212, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701876 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1117212, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701884 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1117212, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-28 19:36:43.701893 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1117221, 'dev': 169,
'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.701906 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1117218, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.635699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.701918 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1117218, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.635699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702048 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1117218, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.635699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702063 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1117218, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.635699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702072 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1117218, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.635699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702080 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1117247, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702089 | 
orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1117212, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702097 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1117247, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702111 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1117247, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702134 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1117224, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6376991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702143 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1117247, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702152 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1117247, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702160 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1117218, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1748457779.635699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702168 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1117224, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6376991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702177 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1117221, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:36:43.702189 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1117224, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6376991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702208 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1117224, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6376991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702217 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1117216, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702226 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1117224, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6376991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702234 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1117216, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702243 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1117247, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702251 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1117216, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702268 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1117216, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702281 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1117223, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702289 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1117223, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702298 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1117216, 'dev': 169, 'nlink': 1, 
'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702306 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1117224, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6376991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702314 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1117223, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702323 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1117244, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702344 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1117244, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702357 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1117223, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702366 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1117212, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:36:43.702375 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1117223, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702383 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1117244, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702391 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1117216, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702400 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1117215, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702417 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1117215, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702429 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1117244, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702438 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1117244, 'dev': 169, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702465 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1117215, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702473 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1117230, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6386993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702481 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.702490 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1117230, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6386993, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702504 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.702513 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1117223, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702525 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1117215, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702538 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1117215, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702547 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1117218, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.635699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:36:43.702555 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1117230, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6386993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702564 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.702572 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1117230, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6386993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2025-05-28 19:36:43.702580 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.702589 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1117244, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702603 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1117230, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6386993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702611 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.702623 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1117215, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-05-28 19:36:43.702636 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1117230, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6386993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-28 19:36:43.702646 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.702655 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1117247, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:36:43.702664 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1117224, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6376991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:36:43.702673 | orchestrator | changed: [testbed-manager] 
=> (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1117216, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:36:43.702688 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1117223, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6366992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:36:43.702697 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1117244, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6406991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:36:43.702711 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1117215, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.634699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:36:43.702726 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1117230, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6386993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-28 19:36:43.702736 | orchestrator | 2025-05-28 19:36:43.702745 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-05-28 19:36:43.702754 | orchestrator | Wednesday 28 May 2025 19:33:31 +0000 (0:00:37.540) 0:01:06.841 ********* 2025-05-28 19:36:43.702763 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-28 19:36:43.702772 | orchestrator | 2025-05-28 19:36:43.702781 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-05-28 19:36:43.702789 | orchestrator | Wednesday 28 May 2025 19:33:31 +0000 (0:00:00.374) 0:01:07.216 ********* 2025-05-28 19:36:43.702798 | orchestrator | [WARNING]: Skipped 2025-05-28 19:36:43.702808 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.702817 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-05-28 19:36:43.702825 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.702834 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-05-28 19:36:43.702843 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-28 19:36:43.702853 | orchestrator | [WARNING]: Skipped 2025-05-28 19:36:43.702861 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.702871 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-05-28 19:36:43.702879 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.702894 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-05-28 19:36:43.702903 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 19:36:43.702912 | orchestrator | [WARNING]: Skipped 2025-05-28 19:36:43.702921 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.702929 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-05-28 19:36:43.702938 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.702947 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-05-28 19:36:43.702956 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-28 19:36:43.702965 | orchestrator | [WARNING]: Skipped 2025-05-28 19:36:43.702974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.702983 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-05-28 19:36:43.702992 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.703001 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-05-28 19:36:43.703009 | orchestrator | [WARNING]: Skipped 2025-05-28 19:36:43.703021 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.703034 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-05-28 19:36:43.703048 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.703056 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-05-28 19:36:43.703064 | orchestrator | [WARNING]: Skipped 2025-05-28 19:36:43.703072 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.703080 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-05-28 19:36:43.703088 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.703096 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-05-28 19:36:43.703103 | orchestrator | [WARNING]: Skipped 2025-05-28 19:36:43.703111 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.703119 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-05-28 19:36:43.703127 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-28 19:36:43.703135 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-05-28 19:36:43.703143 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-28 19:36:43.703150 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-28 19:36:43.703158 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-28 19:36:43.703166 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-28 19:36:43.703174 | orchestrator | 2025-05-28 19:36:43.703186 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-05-28 19:36:43.703194 | orchestrator | Wednesday 28 May 2025 19:33:33 +0000 (0:00:01.466) 0:01:08.682 ********* 2025-05-28 19:36:43.703202 | orchestrator | 
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-28 19:36:43.703211 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.703219 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-28 19:36:43.703227 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.703235 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-28 19:36:43.703243 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.703251 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-28 19:36:43.703259 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.703267 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-28 19:36:43.703279 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.703296 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-28 19:36:43.703309 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.703317 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-05-28 19:36:43.703325 | orchestrator | 2025-05-28 19:36:43.703333 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-05-28 19:36:43.703341 | orchestrator | Wednesday 28 May 2025 19:33:49 +0000 (0:00:16.032) 0:01:24.715 ********* 2025-05-28 19:36:43.703349 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-28 19:36:43.703357 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-28 19:36:43.703365 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.703373 | 
orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.703381 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-28 19:36:43.703389 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.703397 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-28 19:36:43.703404 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.703412 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-28 19:36:43.703420 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.703428 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-28 19:36:43.703436 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.703457 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-05-28 19:36:43.703466 | orchestrator | 2025-05-28 19:36:43.703474 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-05-28 19:36:43.703482 | orchestrator | Wednesday 28 May 2025 19:33:54 +0000 (0:00:05.039) 0:01:29.755 ********* 2025-05-28 19:36:43.703490 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-28 19:36:43.703499 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.703507 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-28 19:36:43.703515 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.703523 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-28 
19:36:43.703531 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.703539 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-28 19:36:43.703546 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.703555 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-28 19:36:43.703563 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.703571 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-28 19:36:43.703579 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.703587 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-05-28 19:36:43.703595 | orchestrator | 2025-05-28 19:36:43.703603 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-05-28 19:36:43.703611 | orchestrator | Wednesday 28 May 2025 19:33:58 +0000 (0:00:04.570) 0:01:34.325 ********* 2025-05-28 19:36:43.703619 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-28 19:36:43.703632 | orchestrator | 2025-05-28 19:36:43.703640 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-05-28 19:36:43.703648 | orchestrator | Wednesday 28 May 2025 19:33:59 +0000 (0:00:00.350) 0:01:34.675 ********* 2025-05-28 19:36:43.703656 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:36:43.703664 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.703672 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.703684 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.703692 | orchestrator | skipping: [testbed-node-3] 2025-05-28 
19:36:43.703700 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.703708 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.703715 | orchestrator | 2025-05-28 19:36:43.703724 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-28 19:36:43.703732 | orchestrator | Wednesday 28 May 2025 19:33:59 +0000 (0:00:00.762) 0:01:35.438 ********* 2025-05-28 19:36:43.703740 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:36:43.703748 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.703755 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.703763 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.703771 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:36:43.703779 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:36:43.703787 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:36:43.703795 | orchestrator | 2025-05-28 19:36:43.703803 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-28 19:36:43.703811 | orchestrator | Wednesday 28 May 2025 19:34:04 +0000 (0:00:04.473) 0:01:39.912 ********* 2025-05-28 19:36:43.703823 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-28 19:36:43.703831 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.703839 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-28 19:36:43.703847 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.703855 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-28 19:36:43.703863 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.703871 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-28 19:36:43.703879 | orchestrator | skipping: 
[testbed-node-4] 2025-05-28 19:36:43.703887 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-28 19:36:43.703895 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.703903 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-28 19:36:43.703911 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.703919 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-28 19:36:43.703927 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:36:43.703935 | orchestrator | 2025-05-28 19:36:43.703943 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-28 19:36:43.703951 | orchestrator | Wednesday 28 May 2025 19:34:07 +0000 (0:00:03.150) 0:01:43.062 ********* 2025-05-28 19:36:43.703959 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-28 19:36:43.703967 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.703975 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-28 19:36:43.703983 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.703991 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-28 19:36:43.704000 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.704007 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-28 19:36:43.704020 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.704028 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-28 19:36:43.704037 | 
orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.704045 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-28 19:36:43.704053 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.704061 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-28 19:36:43.704069 | orchestrator | 2025-05-28 19:36:43.704077 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-28 19:36:43.704085 | orchestrator | Wednesday 28 May 2025 19:34:10 +0000 (0:00:03.271) 0:01:46.334 ********* 2025-05-28 19:36:43.704093 | orchestrator | [WARNING]: Skipped 2025-05-28 19:36:43.704101 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-28 19:36:43.704109 | orchestrator | due to this access issue: 2025-05-28 19:36:43.704117 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-28 19:36:43.704125 | orchestrator | not a directory 2025-05-28 19:36:43.704133 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-28 19:36:43.704141 | orchestrator | 2025-05-28 19:36:43.704149 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-28 19:36:43.704157 | orchestrator | Wednesday 28 May 2025 19:34:12 +0000 (0:00:01.425) 0:01:47.759 ********* 2025-05-28 19:36:43.704165 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:36:43.704173 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.704181 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.704189 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.704197 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.704205 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.704213 | orchestrator | skipping: 
[testbed-node-5] 2025-05-28 19:36:43.704221 | orchestrator | 2025-05-28 19:36:43.704229 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-28 19:36:43.704237 | orchestrator | Wednesday 28 May 2025 19:34:13 +0000 (0:00:00.891) 0:01:48.650 ********* 2025-05-28 19:36:43.704245 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:36:43.704253 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.704267 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.704275 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.704283 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.704291 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.704299 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.704307 | orchestrator | 2025-05-28 19:36:43.704315 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-28 19:36:43.704323 | orchestrator | Wednesday 28 May 2025 19:34:13 +0000 (0:00:00.780) 0:01:49.431 ********* 2025-05-28 19:36:43.704331 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-28 19:36:43.704339 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.704347 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-28 19:36:43.704355 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.704363 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-28 19:36:43.704371 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.704383 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-28 19:36:43.704391 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.704399 | orchestrator | 
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-28 19:36:43.704415 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.704423 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-28 19:36:43.704431 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.704439 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-28 19:36:43.704491 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:36:43.704500 | orchestrator | 2025-05-28 19:36:43.704508 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-28 19:36:43.704516 | orchestrator | Wednesday 28 May 2025 19:34:17 +0000 (0:00:03.575) 0:01:53.006 ********* 2025-05-28 19:36:43.704524 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-28 19:36:43.704532 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:43.704540 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-28 19:36:43.704548 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:43.704556 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-28 19:36:43.704564 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:43.704572 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-28 19:36:43.704580 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:43.704588 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-28 19:36:43.704596 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:43.704604 | orchestrator | 
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-28 19:36:43.704612 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:43.704620 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-28 19:36:43.704628 | orchestrator | skipping: [testbed-manager] 2025-05-28 19:36:43.704636 | orchestrator | 2025-05-28 19:36:43.704644 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-28 19:36:43.704652 | orchestrator | Wednesday 28 May 2025 19:34:21 +0000 (0:00:04.088) 0:01:57.095 ********* 2025-05-28 19:36:43.704661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.704674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.704689 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-28 19:36:43.704703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.704712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.704721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.704729 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.704737 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.704744 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.704759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.704767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.704774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.704781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.704814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-28 19:36:43.704825 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.704837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.704848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.704855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.704862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.704869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.704877 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-28 19:36:43.704888 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.704904 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.704911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.704918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.704925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.704932 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-28 19:36:43.704939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.704953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.704961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.705016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.705024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.705032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.705044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705065 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.705072 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.705079 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.705093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.705111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.705121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.705129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.705136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.705143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.705176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.705183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.705191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.705203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 
'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.705214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.705225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-28 19:36:43.705232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705239 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.705246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.705265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.705275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.705292 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.705307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.705325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.705332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 
19:36:43.705352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.705359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-28 19:36:43.705374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-28 19:36:43.705386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-28 19:36:43.705397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-28 19:36:43.705407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-28 19:36:43.705414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-28 19:36:43.705421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-28 19:36:43.705428 | orchestrator | 2025-05-28 19:36:43.705435 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-28 19:36:43.705442 | orchestrator | Wednesday 28 May 2025 19:34:28 +0000 (0:00:06.424) 0:02:03.520 ********* 2025-05-28 19:36:43.705470 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-28 19:36:43.705478 | orchestrator | 2025-05-28 19:36:43.705484 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 19:36:43.705491 | orchestrator | Wednesday 28 May 2025 19:34:30 +0000 (0:00:02.484) 0:02:06.004 ********* 2025-05-28 19:36:43.705498 | orchestrator | 2025-05-28 19:36:43.705505 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 19:36:43.705511 | orchestrator | Wednesday 28 May 2025 19:34:30 +0000 (0:00:00.054) 0:02:06.059 ********* 2025-05-28 19:36:43.705518 | orchestrator | 2025-05-28 19:36:43.705525 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 19:36:43.705531 | orchestrator | Wednesday 28 May 2025 19:34:30 +0000 (0:00:00.175) 0:02:06.234 ********* 2025-05-28 19:36:43.705538 | orchestrator | 2025-05-28 19:36:43.705545 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 19:36:43.705552 | orchestrator | Wednesday 28 May 2025 19:34:30 +0000 (0:00:00.051) 0:02:06.286 ********* 2025-05-28 19:36:43.705558 | orchestrator | 2025-05-28 19:36:43.705565 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 19:36:43.705572 | orchestrator | Wednesday 28 May 2025 19:34:30 +0000 (0:00:00.057) 0:02:06.343 ********* 2025-05-28 19:36:43.705578 | orchestrator | 2025-05-28 19:36:43.705585 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 
19:36:43.705592 | orchestrator | Wednesday 28 May 2025 19:34:30 +0000 (0:00:00.048) 0:02:06.391 ********* 2025-05-28 19:36:43.705598 | orchestrator | 2025-05-28 19:36:43.705605 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-28 19:36:43.705612 | orchestrator | Wednesday 28 May 2025 19:34:31 +0000 (0:00:00.172) 0:02:06.564 ********* 2025-05-28 19:36:43.705619 | orchestrator | 2025-05-28 19:36:43.705625 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-05-28 19:36:43.705632 | orchestrator | Wednesday 28 May 2025 19:34:31 +0000 (0:00:00.068) 0:02:06.632 ********* 2025-05-28 19:36:43.705639 | orchestrator | changed: [testbed-manager] 2025-05-28 19:36:43.705645 | orchestrator | 2025-05-28 19:36:43.705652 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-28 19:36:43.705659 | orchestrator | Wednesday 28 May 2025 19:34:50 +0000 (0:00:18.929) 0:02:25.562 ********* 2025-05-28 19:36:43.705666 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:36:43.705672 | orchestrator | changed: [testbed-manager] 2025-05-28 19:36:43.705679 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:36:43.705686 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:36:43.705693 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:36:43.705699 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:36:43.705706 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:36:43.705713 | orchestrator | 2025-05-28 19:36:43.705720 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-28 19:36:43.705730 | orchestrator | Wednesday 28 May 2025 19:35:12 +0000 (0:00:22.679) 0:02:48.241 ********* 2025-05-28 19:36:43.705737 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:36:43.705743 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:36:43.705750 | orchestrator | 
changed: [testbed-node-0] 2025-05-28 19:36:43.705757 | orchestrator | 2025-05-28 19:36:43.705764 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-28 19:36:43.705770 | orchestrator | Wednesday 28 May 2025 19:35:26 +0000 (0:00:13.384) 0:03:01.626 ********* 2025-05-28 19:36:43.705777 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:36:43.705784 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:36:43.705790 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:36:43.705797 | orchestrator | 2025-05-28 19:36:43.705804 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-28 19:36:43.705811 | orchestrator | Wednesday 28 May 2025 19:35:38 +0000 (0:00:12.561) 0:03:14.187 ********* 2025-05-28 19:36:43.705822 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:36:43.705829 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:36:43.705836 | orchestrator | changed: [testbed-manager] 2025-05-28 19:36:43.705843 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:36:43.705853 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:36:43.705860 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:36:43.705867 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:36:43.705873 | orchestrator | 2025-05-28 19:36:43.705880 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-28 19:36:43.705887 | orchestrator | Wednesday 28 May 2025 19:35:57 +0000 (0:00:18.346) 0:03:32.534 ********* 2025-05-28 19:36:43.705893 | orchestrator | changed: [testbed-manager] 2025-05-28 19:36:43.705900 | orchestrator | 2025-05-28 19:36:43.705907 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-28 19:36:43.705914 | orchestrator | Wednesday 28 May 2025 19:36:06 +0000 (0:00:09.883) 0:03:42.417 ********* 2025-05-28 19:36:43.705920 | orchestrator | 
changed: [testbed-node-1] 2025-05-28 19:36:43.705927 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:36:43.705934 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:36:43.705940 | orchestrator | 2025-05-28 19:36:43.705947 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-05-28 19:36:43.705954 | orchestrator | Wednesday 28 May 2025 19:36:20 +0000 (0:00:13.184) 0:03:55.602 ********* 2025-05-28 19:36:43.705961 | orchestrator | changed: [testbed-manager] 2025-05-28 19:36:43.705967 | orchestrator | 2025-05-28 19:36:43.705974 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-28 19:36:43.705981 | orchestrator | Wednesday 28 May 2025 19:36:27 +0000 (0:00:07.296) 0:04:02.899 ********* 2025-05-28 19:36:43.705988 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:36:43.705994 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:36:43.706001 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:36:43.706008 | orchestrator | 2025-05-28 19:36:43.706014 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:36:43.706058 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-28 19:36:43.706066 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-28 19:36:43.706072 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-28 19:36:43.706080 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-28 19:36:43.706086 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-28 19:36:43.706093 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 
2025-05-28 19:36:43.706100 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-28 19:36:43.706107 | orchestrator | 2025-05-28 19:36:43.706114 | orchestrator | 2025-05-28 19:36:43.706120 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:36:43.706127 | orchestrator | Wednesday 28 May 2025 19:36:40 +0000 (0:00:13.444) 0:04:16.344 ********* 2025-05-28 19:36:43.706134 | orchestrator | =============================================================================== 2025-05-28 19:36:43.706141 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 37.54s 2025-05-28 19:36:43.706147 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 22.68s 2025-05-28 19:36:43.706154 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.93s 2025-05-28 19:36:43.706165 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.35s 2025-05-28 19:36:43.706172 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.03s 2025-05-28 19:36:43.706178 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.44s 2025-05-28 19:36:43.706185 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 13.38s 2025-05-28 19:36:43.706192 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.18s 2025-05-28 19:36:43.706198 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.56s 2025-05-28 19:36:43.706205 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.88s 2025-05-28 19:36:43.706217 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 7.30s 2025-05-28 19:36:43.706224 | orchestrator | 
prometheus : Copying over config.json files ----------------------------- 7.27s 2025-05-28 19:36:43.706231 | orchestrator | prometheus : Check prometheus containers -------------------------------- 6.42s 2025-05-28 19:36:43.706237 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.57s 2025-05-28 19:36:43.706244 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.04s 2025-05-28 19:36:43.706250 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 4.57s 2025-05-28 19:36:43.706257 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.47s 2025-05-28 19:36:43.706264 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 4.09s 2025-05-28 19:36:43.706270 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.91s 2025-05-28 19:36:43.706277 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 3.58s 2025-05-28 19:36:43.706287 | orchestrator | 2025-05-28 19:36:43 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED 2025-05-28 19:36:43.706294 | orchestrator | 2025-05-28 19:36:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:36:46.742610 | orchestrator | 2025-05-28 19:36:46 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state STARTED 2025-05-28 19:36:46.743086 | orchestrator | 2025-05-28 19:36:46 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:36:46.745923 | orchestrator | 2025-05-28 19:36:46 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:36:46.747351 | orchestrator | 2025-05-28 19:36:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:36:46.749394 | orchestrator | 2025-05-28 19:36:46 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state STARTED 
2025-05-28 19:36:46.749531 | orchestrator | 2025-05-28 19:36:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:36:49.805677 | orchestrator | 2025-05-28 19:36:49 | INFO  | Task d7ce1f5d-77f4-47bc-9b93-a862026a8c6f is in state SUCCESS 2025-05-28 19:36:49.807782 | orchestrator | 2025-05-28 19:36:49.807836 | orchestrator | 2025-05-28 19:36:49.807850 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:36:49.807861 | orchestrator | 2025-05-28 19:36:49.807873 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:36:49.807884 | orchestrator | Wednesday 28 May 2025 19:33:42 +0000 (0:00:00.250) 0:00:00.250 ********* 2025-05-28 19:36:49.807896 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:36:49.807908 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:36:49.807920 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:36:49.807931 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:36:49.807942 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:36:49.807953 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:36:49.807964 | orchestrator | 2025-05-28 19:36:49.807975 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:36:49.808010 | orchestrator | Wednesday 28 May 2025 19:33:42 +0000 (0:00:00.850) 0:00:01.101 ********* 2025-05-28 19:36:49.808022 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-28 19:36:49.808033 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-28 19:36:49.808045 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-28 19:36:49.808056 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-28 19:36:49.808067 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-28 19:36:49.808077 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-28 
19:36:49.808089 | orchestrator | 2025-05-28 19:36:49.808100 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-28 19:36:49.808111 | orchestrator | 2025-05-28 19:36:49.808122 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-28 19:36:49.808133 | orchestrator | Wednesday 28 May 2025 19:33:43 +0000 (0:00:00.958) 0:00:02.059 ********* 2025-05-28 19:36:49.808145 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:36:49.808158 | orchestrator | 2025-05-28 19:36:49.808169 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-28 19:36:49.808180 | orchestrator | Wednesday 28 May 2025 19:33:45 +0000 (0:00:01.368) 0:00:03.428 ********* 2025-05-28 19:36:49.808192 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-28 19:36:49.808203 | orchestrator | 2025-05-28 19:36:49.808214 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-28 19:36:49.808343 | orchestrator | Wednesday 28 May 2025 19:33:48 +0000 (0:00:03.296) 0:00:06.724 ********* 2025-05-28 19:36:49.808360 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-28 19:36:49.808375 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-28 19:36:49.808388 | orchestrator | 2025-05-28 19:36:49.808400 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-28 19:36:49.808412 | orchestrator | Wednesday 28 May 2025 19:33:55 +0000 (0:00:06.503) 0:00:13.228 ********* 2025-05-28 19:36:49.808425 | orchestrator | ok: [testbed-node-0] => (item=service) 
2025-05-28 19:36:49.808438 | orchestrator | 2025-05-28 19:36:49.808509 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-28 19:36:49.808522 | orchestrator | Wednesday 28 May 2025 19:33:58 +0000 (0:00:03.552) 0:00:16.780 ********* 2025-05-28 19:36:49.808535 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 19:36:49.808547 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-28 19:36:49.808559 | orchestrator | 2025-05-28 19:36:49.808571 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-28 19:36:49.808584 | orchestrator | Wednesday 28 May 2025 19:34:02 +0000 (0:00:03.917) 0:00:20.698 ********* 2025-05-28 19:36:49.808597 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 19:36:49.808610 | orchestrator | 2025-05-28 19:36:49.808622 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-28 19:36:49.808635 | orchestrator | Wednesday 28 May 2025 19:34:05 +0000 (0:00:03.399) 0:00:24.097 ********* 2025-05-28 19:36:49.808647 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-28 19:36:49.808659 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-28 19:36:49.808672 | orchestrator | 2025-05-28 19:36:49.808684 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-28 19:36:49.809114 | orchestrator | Wednesday 28 May 2025 19:34:14 +0000 (0:00:08.117) 0:00:32.215 ********* 2025-05-28 19:36:49.809172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.809216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.809238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.809269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.809291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.809334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.809356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.809376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.809395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.809423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.809468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.809511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.809532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.809551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.809577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.809597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.809640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.809661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.809682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.809709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.809730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.809761 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.809794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-28 19:36:49.809809 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-28 19:36:49.810114 | orchestrator |
2025-05-28 19:36:49.810134 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-28 19:36:49.810147 | orchestrator | Wednesday 28 May 2025 19:34:16 +0000 (0:00:02.760) 0:00:34.976 *********
2025-05-28 19:36:49.810158 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.810170 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.810181 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.810193 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:36:49.810204 | orchestrator |
2025-05-28 19:36:49.810215 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-05-28 19:36:49.810226 | orchestrator | Wednesday 28 May 2025 19:34:17 +0000 (0:00:01.189) 0:00:36.165 *********
2025-05-28 19:36:49.810237 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-05-28 19:36:49.810248 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-05-28 19:36:49.810259 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-05-28 19:36:49.810271 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-05-28 19:36:49.810281 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-05-28 19:36:49.810292 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-05-28 19:36:49.810323 | orchestrator |
2025-05-28 19:36:49.810342 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-05-28 19:36:49.810368 | orchestrator | Wednesday 28 May 2025 19:34:22 +0000 (0:00:04.348) 0:00:40.513 *********
2025-05-28 19:36:49.810382 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-28 19:36:49.810395 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name':
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 19:36:49.810419 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 19:36:49.810431 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 19:36:49.810480 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 19:36:49.810509 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-28 19:36:49.810521 | orchestrator | 
changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-28 19:36:49.810541 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-28 19:36:49.810554 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-28 19:36:49.810571 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-28 19:36:49.810596 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-28 19:36:49.810625 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-05-28 19:36:49.810646 | orchestrator |
2025-05-28 19:36:49.810666 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-05-28 19:36:49.810684 | orchestrator | Wednesday 28 May 2025 19:34:27 +0000 (0:00:04.904) 0:00:45.418 *********
2025-05-28 19:36:49.810704 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-28 19:36:49.810724 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-28 19:36:49.810741 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-05-28 19:36:49.810753 | orchestrator |
2025-05-28 19:36:49.810764 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-05-28 19:36:49.810775 | orchestrator | Wednesday 28 May 2025 19:34:28 +0000 (0:00:01.695) 0:00:47.113 *********
2025-05-28 19:36:49.810786 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-05-28 19:36:49.810799 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-05-28 19:36:49.810818 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-05-28 19:36:49.810836 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-05-28 19:36:49.810854 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-05-28 19:36:49.810871 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-05-28 19:36:49.810887 | orchestrator |
2025-05-28 19:36:49.810906 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-05-28 19:36:49.811042 | orchestrator | Wednesday 28 May 2025 19:34:31 +0000 (0:00:02.943) 0:00:50.057 *********
2025-05-28 19:36:49.811066 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-05-28 19:36:49.811078 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-05-28 19:36:49.811089 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-05-28 19:36:49.811100 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-05-28 19:36:49.811111 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-05-28 19:36:49.811122 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-05-28 19:36:49.811133 | orchestrator |
2025-05-28 19:36:49.811144 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-05-28 19:36:49.811154 | orchestrator | Wednesday 28 May 2025 19:34:32 +0000 (0:00:01.135) 0:00:51.192 *********
2025-05-28 19:36:49.811165 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.811176 | orchestrator |
2025-05-28 19:36:49.811187 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-05-28 19:36:49.811198 | orchestrator | Wednesday 28 May 2025 19:34:33 +0000 (0:00:00.179) 0:00:51.372 *********
2025-05-28 19:36:49.811209 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.811220 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.811231 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.811242 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:36:49.811253 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:36:49.811264 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:36:49.811275 | orchestrator |
2025-05-28 19:36:49.811292 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-28 19:36:49.811303 | orchestrator | Wednesday 28 May 2025 19:34:34 +0000 (0:00:00.977) 0:00:52.350 *********
2025-05-28 19:36:49.811315 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:36:49.811328 | orchestrator |
2025-05-28 19:36:49.811339 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-05-28 19:36:49.811350 | orchestrator | Wednesday 28 May 2025 19:34:35 +0000 (0:00:01.018) 0:00:53.368 *********
2025-05-28 19:36:49.811361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.811414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.811436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.811478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.811497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.811509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.811554 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.811568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.811588 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.811605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.811617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.811629 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.811640 | orchestrator | 2025-05-28 19:36:49.811651 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-28 19:36:49.811663 | orchestrator | Wednesday 28 May 2025 19:34:38 +0000 (0:00:03.722) 0:00:57.090 ********* 2025-05-28 19:36:49.811708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.811729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.811741 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:49.811753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.811770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.811783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.811826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.811846 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:49.811858 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:49.811870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.811882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.811893 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:49.811910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.811922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.811933 | 
orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:49.811975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.811996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812007 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:49.812019 | orchestrator | 2025-05-28 19:36:49.812030 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-28 19:36:49.812041 | orchestrator | Wednesday 28 May 
2025 19:34:41 +0000 (0:00:02.607) 0:00:59.698 ********* 2025-05-28 19:36:49.812053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.812070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.812135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812148 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:49.812160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.812188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812200 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812218 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:49.812229 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:49.812241 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:49.812283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812308 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:49.812320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812348 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:49.812359 | orchestrator | 2025-05-28 19:36:49.812370 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-28 19:36:49.812382 | orchestrator | Wednesday 28 May 2025 19:34:44 +0000 (0:00:02.711) 0:01:02.409 ********* 2025-05-28 19:36:49.812393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.812478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.812526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-05-28 19:36:49.812545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.812576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.812639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.812656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.812678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.812721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812746 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.812763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.812782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.812794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.812951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.812994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813058 | orchestrator | 2025-05-28 19:36:49.813069 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-28 19:36:49.813081 | orchestrator | Wednesday 28 May 2025 19:34:47 +0000 (0:00:02.861) 0:01:05.270 ********* 2025-05-28 19:36:49.813092 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-28 19:36:49.813103 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:49.813114 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-28 19:36:49.813125 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:49.813136 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-28 19:36:49.813147 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:49.813158 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-28 19:36:49.813169 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-28 19:36:49.813180 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-28 19:36:49.813191 | orchestrator | 2025-05-28 19:36:49.813202 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-28 19:36:49.813220 | orchestrator | Wednesday 28 May 2025 19:34:49 +0000 (0:00:02.771) 0:01:08.041 ********* 2025-05-28 19:36:49.813236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.813249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.813280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.813319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.813343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.813361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.813419 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813555 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813566 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.813605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813643 | orchestrator | 2025-05-28 19:36:49.813667 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-28 19:36:49.813686 | orchestrator | Wednesday 28 May 2025 19:35:05 +0000 (0:00:15.773) 0:01:23.815 ********* 2025-05-28 19:36:49.813704 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:49.813722 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:49.813740 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:49.813761 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:36:49.813784 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:36:49.813801 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:36:49.813819 | orchestrator | 2025-05-28 19:36:49.813837 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 
2025-05-28 19:36:49.813860 | orchestrator | Wednesday 28 May 2025 19:35:08 +0000 (0:00:03.157) 0:01:26.972 ********* 2025-05-28 19:36:49.813878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.813909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.813980 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:49.814009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.814074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814156 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:49.814176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.814209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 
 2025-05-28 19:36:49.814285 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:49.814311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.814335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814412 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:49.814432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.814526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814602 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:49.814630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.814797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.814848 | orchestrator | skipping: 
[testbed-node-5] 2025-05-28 19:36:49.814859 | orchestrator | 2025-05-28 19:36:49.814871 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-28 19:36:49.814882 | orchestrator | Wednesday 28 May 2025 19:35:10 +0000 (0:00:01.347) 0:01:28.320 ********* 2025-05-28 19:36:49.814893 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:49.814904 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:49.814914 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:49.814924 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:49.814934 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:49.814944 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:49.814954 | orchestrator | 2025-05-28 19:36:49.814964 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-28 19:36:49.814973 | orchestrator | Wednesday 28 May 2025 19:35:10 +0000 (0:00:00.700) 0:01:29.020 ********* 2025-05-28 19:36:49.814992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.815011 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.815022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.815037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.815048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-28 19:36:49.815059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.815081 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.815093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.815108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.815118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-28 19:36:49.815143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.815154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.815164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.815174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.815189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.815199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.815222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.815233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.815243 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.815258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.815268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.815289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.815300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-28 19:36:49.815310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-28 19:36:49.815320 | orchestrator | 2025-05-28 19:36:49.815330 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-28 19:36:49.815340 | orchestrator | Wednesday 28 May 2025 19:35:13 +0000 (0:00:02.965) 0:01:31.985 ********* 2025-05-28 19:36:49.815350 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:49.815360 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:49.815369 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:36:49.815379 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:36:49.815389 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:36:49.815398 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:36:49.815410 | orchestrator | 2025-05-28 19:36:49.815421 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-28 19:36:49.815431 | orchestrator | Wednesday 28 May 2025 19:35:15 +0000 (0:00:01.276) 0:01:33.262 ********* 2025-05-28 19:36:49.815618 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:36:49.815649 | orchestrator | 2025-05-28 19:36:49.815659 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-28 19:36:49.815668 | orchestrator | Wednesday 28 May 2025 19:35:17 +0000 (0:00:02.635) 0:01:35.898 ********* 2025-05-28 19:36:49.815677 | orchestrator | changed: [testbed-node-0] 2025-05-28 
19:36:49.815692 | orchestrator | 2025-05-28 19:36:49.815701 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-28 19:36:49.815719 | orchestrator | Wednesday 28 May 2025 19:35:19 +0000 (0:00:02.183) 0:01:38.081 ********* 2025-05-28 19:36:49.815728 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:36:49.815737 | orchestrator | 2025-05-28 19:36:49.815746 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 19:36:49.815755 | orchestrator | Wednesday 28 May 2025 19:35:37 +0000 (0:00:17.710) 0:01:55.791 ********* 2025-05-28 19:36:49.815763 | orchestrator | 2025-05-28 19:36:49.815771 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 19:36:49.815779 | orchestrator | Wednesday 28 May 2025 19:35:37 +0000 (0:00:00.079) 0:01:55.870 ********* 2025-05-28 19:36:49.815787 | orchestrator | 2025-05-28 19:36:49.815795 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 19:36:49.815803 | orchestrator | Wednesday 28 May 2025 19:35:37 +0000 (0:00:00.212) 0:01:56.083 ********* 2025-05-28 19:36:49.815811 | orchestrator | 2025-05-28 19:36:49.815819 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 19:36:49.815827 | orchestrator | Wednesday 28 May 2025 19:35:37 +0000 (0:00:00.053) 0:01:56.136 ********* 2025-05-28 19:36:49.815834 | orchestrator | 2025-05-28 19:36:49.815842 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 19:36:49.815850 | orchestrator | Wednesday 28 May 2025 19:35:37 +0000 (0:00:00.050) 0:01:56.187 ********* 2025-05-28 19:36:49.815858 | orchestrator | 2025-05-28 19:36:49.815866 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-28 19:36:49.815874 | orchestrator | Wednesday 
28 May 2025 19:35:38 +0000 (0:00:00.051) 0:01:56.238 ********* 2025-05-28 19:36:49.815881 | orchestrator | 2025-05-28 19:36:49.815889 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-28 19:36:49.815897 | orchestrator | Wednesday 28 May 2025 19:35:38 +0000 (0:00:00.226) 0:01:56.465 ********* 2025-05-28 19:36:49.815905 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:36:49.815913 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:36:49.815921 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:36:49.815929 | orchestrator | 2025-05-28 19:36:49.815937 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-28 19:36:49.815944 | orchestrator | Wednesday 28 May 2025 19:36:02 +0000 (0:00:24.261) 0:02:20.726 ********* 2025-05-28 19:36:49.815952 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:36:49.815960 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:36:49.815968 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:36:49.815976 | orchestrator | 2025-05-28 19:36:49.815984 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-28 19:36:49.816002 | orchestrator | Wednesday 28 May 2025 19:36:08 +0000 (0:00:06.375) 0:02:27.101 ********* 2025-05-28 19:36:49.816011 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:36:49.816019 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:36:49.816027 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:36:49.816035 | orchestrator | 2025-05-28 19:36:49.816043 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-28 19:36:49.816051 | orchestrator | Wednesday 28 May 2025 19:36:35 +0000 (0:00:26.712) 0:02:53.813 ********* 2025-05-28 19:36:49.816058 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:36:49.816067 | orchestrator | changed: [testbed-node-5] 2025-05-28 
19:36:49.816075 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:36:49.816083 | orchestrator | 2025-05-28 19:36:49.816091 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-28 19:36:49.816099 | orchestrator | Wednesday 28 May 2025 19:36:46 +0000 (0:00:10.675) 0:03:04.489 ********* 2025-05-28 19:36:49.816107 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:49.816115 | orchestrator | 2025-05-28 19:36:49.816123 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:36:49.816131 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-28 19:36:49.816145 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-28 19:36:49.816154 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-28 19:36:49.816162 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 19:36:49.816170 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 19:36:49.816178 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 19:36:49.816186 | orchestrator | 2025-05-28 19:36:49.816194 | orchestrator | 2025-05-28 19:36:49.816202 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:36:49.816210 | orchestrator | Wednesday 28 May 2025 19:36:46 +0000 (0:00:00.560) 0:03:05.049 ********* 2025-05-28 19:36:49.816218 | orchestrator | =============================================================================== 2025-05-28 19:36:49.816226 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.71s 2025-05-28 19:36:49.816234 | 
orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.26s 2025-05-28 19:36:49.816241 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.71s 2025-05-28 19:36:49.816249 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.77s 2025-05-28 19:36:49.816264 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.68s 2025-05-28 19:36:49.816272 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.12s 2025-05-28 19:36:49.816280 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.50s 2025-05-28 19:36:49.816288 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.38s 2025-05-28 19:36:49.816296 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.90s 2025-05-28 19:36:49.816304 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 4.35s 2025-05-28 19:36:49.816312 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.92s 2025-05-28 19:36:49.816320 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.72s 2025-05-28 19:36:49.816328 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.55s 2025-05-28 19:36:49.816336 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.40s 2025-05-28 19:36:49.816475 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.30s 2025-05-28 19:36:49.816485 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.16s 2025-05-28 19:36:49.816493 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.97s 2025-05-28 19:36:49.816501 | orchestrator | 
cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.94s 2025-05-28 19:36:49.816509 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.86s 2025-05-28 19:36:49.816517 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.77s 2025-05-28 19:36:49.816525 | orchestrator | 2025-05-28 19:36:49 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:36:49.816533 | orchestrator | 2025-05-28 19:36:49 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:36:49.816541 | orchestrator | 2025-05-28 19:36:49 | INFO  | Task 7bf1cf2a-4856-46a5-beae-ff20f3091d2c is in state STARTED 2025-05-28 19:36:49.816549 | orchestrator | 2025-05-28 19:36:49 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:36:49.816568 | orchestrator | 2025-05-28 19:36:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:36:49.818093 | orchestrator | 2025-05-28 19:36:49 | INFO  | Task 00472a6a-108e-44ca-a404-4de7ad457082 is in state SUCCESS 2025-05-28 19:36:49.819901 | orchestrator | 2025-05-28 19:36:49.819923 | orchestrator | 2025-05-28 19:36:49.819931 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:36:49.819940 | orchestrator | 2025-05-28 19:36:49.819948 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:36:49.819956 | orchestrator | Wednesday 28 May 2025 19:33:26 +0000 (0:00:00.320) 0:00:00.320 ********* 2025-05-28 19:36:49.819964 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:36:49.819973 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:36:49.819981 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:36:49.819989 | orchestrator | 2025-05-28 19:36:49.819997 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-05-28 19:36:49.820005 | orchestrator | Wednesday 28 May 2025 19:33:26 +0000 (0:00:00.745) 0:00:01.065 ********* 2025-05-28 19:36:49.820014 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-28 19:36:49.820022 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-28 19:36:49.820030 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-28 19:36:49.820038 | orchestrator | 2025-05-28 19:36:49.820046 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-28 19:36:49.820054 | orchestrator | 2025-05-28 19:36:49.820062 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-28 19:36:49.820070 | orchestrator | Wednesday 28 May 2025 19:33:27 +0000 (0:00:00.678) 0:00:01.744 ********* 2025-05-28 19:36:49.820078 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:36:49.820086 | orchestrator | 2025-05-28 19:36:49.820094 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-28 19:36:49.820102 | orchestrator | Wednesday 28 May 2025 19:33:28 +0000 (0:00:01.254) 0:00:02.999 ********* 2025-05-28 19:36:49.820110 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-28 19:36:49.820118 | orchestrator | 2025-05-28 19:36:49.820126 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-28 19:36:49.820134 | orchestrator | Wednesday 28 May 2025 19:33:32 +0000 (0:00:03.348) 0:00:06.347 ********* 2025-05-28 19:36:49.820142 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-28 19:36:49.820150 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-28 
19:36:49.820158 | orchestrator | 2025-05-28 19:36:49.820166 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-28 19:36:49.820174 | orchestrator | Wednesday 28 May 2025 19:33:38 +0000 (0:00:06.576) 0:00:12.924 ********* 2025-05-28 19:36:49.820182 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-28 19:36:49.820190 | orchestrator | 2025-05-28 19:36:49.820198 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-28 19:36:49.820206 | orchestrator | Wednesday 28 May 2025 19:33:41 +0000 (0:00:03.228) 0:00:16.152 ********* 2025-05-28 19:36:49.820214 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 19:36:49.820229 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-28 19:36:49.820238 | orchestrator | 2025-05-28 19:36:49.820246 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-28 19:36:49.820254 | orchestrator | Wednesday 28 May 2025 19:33:45 +0000 (0:00:04.063) 0:00:20.215 ********* 2025-05-28 19:36:49.820262 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 19:36:49.820270 | orchestrator | 2025-05-28 19:36:49.820278 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-28 19:36:49.820296 | orchestrator | Wednesday 28 May 2025 19:33:49 +0000 (0:00:03.211) 0:00:23.426 ********* 2025-05-28 19:36:49.820304 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-28 19:36:49.820312 | orchestrator | 2025-05-28 19:36:49.820320 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-28 19:36:49.820386 | orchestrator | Wednesday 28 May 2025 19:33:53 +0000 (0:00:04.050) 0:00:27.477 ********* 2025-05-28 19:36:49.820408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 19:36:49.820427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:36:49.820480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 19:36:49.820497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:36:49.820511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.820533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-28 19:36:49.820773 | orchestrator |
2025-05-28 19:36:49.820782 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-28 19:36:49.820792 | orchestrator | Wednesday 28 May 2025 19:33:58 +0000 (0:00:05.347) 0:00:32.825 *********
2025-05-28 19:36:49.820801 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:36:49.820810 | orchestrator |
2025-05-28 19:36:49.820819 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-05-28 19:36:49.820828 | orchestrator | Wednesday 28 May 2025 19:33:58 +0000 (0:00:00.387) 0:00:33.212 *********
2025-05-28 19:36:49.820837 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:36:49.820846 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:36:49.820861 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:36:49.820870 | orchestrator |
2025-05-28 19:36:49.820925 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-05-28 19:36:49.820935 | orchestrator | Wednesday 28 May 2025 19:34:06 +0000 (0:00:08.033) 0:00:41.246 *********
2025-05-28 19:36:49.820948 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-28 19:36:49.820957 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-28 19:36:49.820965 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-28 19:36:49.820974 | orchestrator |
2025-05-28 19:36:49.820982 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-05-28 19:36:49.820990 | orchestrator | Wednesday 28 May 2025 19:34:08 +0000 (0:00:01.951) 0:00:43.198 *********
2025-05-28 19:36:49.820998 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-28 19:36:49.821006 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-28 19:36:49.821014 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-28 19:36:49.821022 | orchestrator |
2025-05-28 19:36:49.821029 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-05-28 19:36:49.821038 | orchestrator | Wednesday 28 May 2025 19:34:10 +0000 (0:00:01.230) 0:00:44.428 *********
2025-05-28 19:36:49.821046 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:36:49.821054 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:36:49.821062 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:36:49.821070 | orchestrator |
2025-05-28 19:36:49.821078 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-05-28 19:36:49.821086 | orchestrator | Wednesday 28 May 2025 19:34:10 +0000 (0:00:00.686) 0:00:45.114 *********
2025-05-28 19:36:49.821094 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.821102 | orchestrator |
2025-05-28 19:36:49.821109 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-05-28 19:36:49.821117 | orchestrator | Wednesday 28 May 2025 19:34:10 +0000 (0:00:00.097) 0:00:45.212 *********
2025-05-28 19:36:49.821125 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.821133 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.821141 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.821149 | orchestrator |
2025-05-28 19:36:49.821157 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-28 19:36:49.821165 | orchestrator | Wednesday 28 May 2025 19:34:11 +0000 (0:00:00.305) 0:00:45.518 *********
2025-05-28 19:36:49.821173 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:36:49.821181 | orchestrator |
2025-05-28 19:36:49.821189 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-05-28 19:36:49.821197 | orchestrator | Wednesday 28 May 2025 19:34:11 +0000 (0:00:00.563) 0:00:46.082 *********
2025-05-28 19:36:49.821213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821267 | orchestrator |
2025-05-28 19:36:49.821275 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2025-05-28 19:36:49.821283 | orchestrator | Wednesday 28 May 2025 19:34:16 +0000 (0:00:04.584) 0:00:50.666 *********
2025-05-28 19:36:49.821295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821304 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.821319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821336 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.821352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821361 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.821370 | orchestrator |
2025-05-28 19:36:49.821378 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2025-05-28 19:36:49.821386 | orchestrator | Wednesday 28 May 2025 19:34:22 +0000 (0:00:05.897) 0:00:56.563 *********
2025-05-28 19:36:49.821400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821409 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.821418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821432 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.821463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821472 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.821481 | orchestrator |
2025-05-28 19:36:49.821489 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2025-05-28 19:36:49.821497 | orchestrator | Wednesday 28 May 2025 19:34:27 +0000 (0:00:05.366) 0:01:01.930 *********
2025-05-28 19:36:49.821505 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.821513 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.821521 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.821530 | orchestrator |
2025-05-28 19:36:49.821542 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2025-05-28 19:36:49.821555 | orchestrator | Wednesday 28 May 2025 19:34:30 +0000 (0:00:03.284) 0:01:05.214 *********
2025-05-28 19:36:49.821564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-28 19:36:49.821593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-28 19:36:49.821626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-28 19:36:49.821684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-28 19:36:49.821695 | orchestrator |
2025-05-28 19:36:49.821703 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-05-28 19:36:49.821711 | orchestrator | Wednesday 28 May 2025 19:34:35 +0000 (0:00:04.116) 0:01:09.330 *********
2025-05-28 19:36:49.821719 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:36:49.821728 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:36:49.821736 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:36:49.821744 | orchestrator |
2025-05-28 19:36:49.821752 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-05-28 19:36:49.821760 | orchestrator | Wednesday 28 May 2025 19:34:46 +0000 (0:00:11.716) 0:01:21.047 *********
2025-05-28 19:36:49.821793 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.821802 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.821810 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.821819 | orchestrator |
2025-05-28 19:36:49.821827 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-05-28 19:36:49.821835 | orchestrator | Wednesday 28 May 2025 19:34:59 +0000 (0:00:13.101) 0:01:34.148 *********
2025-05-28 19:36:49.821849 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.821857 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.821865 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.821873 | orchestrator |
2025-05-28 19:36:49.821881 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-05-28 19:36:49.821889 | orchestrator | Wednesday 28 May 2025 19:35:10 +0000 (0:00:10.267) 0:01:44.415 *********
2025-05-28 19:36:49.821897 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.821905 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.821913 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.821921 | orchestrator |
2025-05-28 19:36:49.821929 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-05-28 19:36:49.821937 | orchestrator | Wednesday 28 May 2025 19:35:17 +0000 (0:00:07.801) 0:01:52.217 *********
2025-05-28 19:36:49.821946 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.821958 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.821967 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.821975 | orchestrator |
2025-05-28 19:36:49.821983 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-05-28 19:36:49.821991 | orchestrator | Wednesday 28 May 2025 19:35:23 +0000 (0:00:05.418) 0:01:57.636 *********
2025-05-28 19:36:49.821999 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.822007 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.822051 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.822062 | orchestrator |
2025-05-28 19:36:49.822070 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-05-28 19:36:49.822078 | orchestrator | Wednesday 28 May 2025 19:35:23 +0000 (0:00:00.277) 0:01:57.914 *********
2025-05-28 19:36:49.822086 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-28 19:36:49.822095 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:36:49.822103 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-28 19:36:49.822111 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:36:49.822119 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-28 19:36:49.822127 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:36:49.822135 | orchestrator |
2025-05-28 19:36:49.822143 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-05-28 19:36:49.822151 | orchestrator | Wednesday 28 May 2025 19:35:28 +0000 (0:00:04.619) 0:02:02.533 *********
2025-05-28 19:36:49.822176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 19:36:49.822198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:36:49.822212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 19:36:49.822226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': 
{'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:36:49.822252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-28 19:36:49.822275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-28 19:36:49.822292 | orchestrator | 2025-05-28 19:36:49.822300 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-28 19:36:49.822308 | orchestrator | Wednesday 28 May 2025 19:35:32 +0000 (0:00:04.580) 0:02:07.113 ********* 2025-05-28 19:36:49.822316 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:36:49.822324 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:36:49.822333 | orchestrator | 
skipping: [testbed-node-2]
2025-05-28 19:36:49.822341 | orchestrator |
2025-05-28 19:36:49.822353 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-05-28 19:36:49.822362 | orchestrator | Wednesday 28 May 2025 19:35:33 +0000 (0:00:00.310) 0:02:07.424 *********
2025-05-28 19:36:49.822369 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:36:49.822377 | orchestrator |
2025-05-28 19:36:49.822385 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-05-28 19:36:49.822402 | orchestrator | Wednesday 28 May 2025 19:35:35 +0000 (0:00:02.164) 0:02:09.589 *********
2025-05-28 19:36:49.822410 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:36:49.822427 | orchestrator |
2025-05-28 19:36:49.822435 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-05-28 19:36:49.822465 | orchestrator | Wednesday 28 May 2025 19:35:37 +0000 (0:00:02.307) 0:02:11.896 *********
2025-05-28 19:36:49.822473 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:36:49.822481 | orchestrator |
2025-05-28 19:36:49.822489 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-05-28 19:36:49.822497 | orchestrator | Wednesday 28 May 2025 19:35:39 +0000 (0:00:02.188) 0:02:14.085 *********
2025-05-28 19:36:49.822505 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:36:49.822513 | orchestrator |
2025-05-28 19:36:49.822521 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-05-28 19:36:49.822529 | orchestrator | Wednesday 28 May 2025 19:36:08 +0000 (0:00:28.341) 0:02:42.426 *********
2025-05-28 19:36:49.822537 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:36:49.822545 | orchestrator |
2025-05-28 19:36:49.822553 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-28 19:36:49.822561 | orchestrator | Wednesday 28 May 2025 19:36:10 +0000 (0:00:02.315) 0:02:44.741 *********
2025-05-28 19:36:49.822569 | orchestrator |
2025-05-28 19:36:49.822577 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-28 19:36:49.822585 | orchestrator | Wednesday 28 May 2025 19:36:10 +0000 (0:00:00.147) 0:02:44.889 *********
2025-05-28 19:36:49.822593 | orchestrator |
2025-05-28 19:36:49.822601 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-28 19:36:49.822615 | orchestrator | Wednesday 28 May 2025 19:36:10 +0000 (0:00:00.095) 0:02:44.984 *********
2025-05-28 19:36:49.822623 | orchestrator |
2025-05-28 19:36:49.822631 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-05-28 19:36:49.822639 | orchestrator | Wednesday 28 May 2025 19:36:10 +0000 (0:00:00.269) 0:02:45.254 *********
2025-05-28 19:36:49.822647 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:36:49.822655 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:36:49.822663 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:36:49.822671 | orchestrator |
2025-05-28 19:36:49.822679 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:36:49.822687 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-28 19:36:49.822700 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-28 19:36:49.822708 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-28 19:36:49.822716 | orchestrator |
2025-05-28 19:36:49.822724 | orchestrator |
2025-05-28 19:36:49.822733 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:36:49.822741 | orchestrator | Wednesday 28 May 2025 19:36:47 +0000 (0:00:36.055) 0:03:21.310 *********
2025-05-28 19:36:49.822749 | orchestrator | ===============================================================================
2025-05-28 19:36:49.822757 | orchestrator | glance : Restart glance-api container ---------------------------------- 36.06s
2025-05-28 19:36:49.822765 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.34s
2025-05-28 19:36:49.822773 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 13.10s
2025-05-28 19:36:49.822781 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 11.72s
2025-05-28 19:36:49.822789 | orchestrator | glance : Copying over glance-swift.conf for glance_api ----------------- 10.27s
2025-05-28 19:36:49.822797 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 8.03s
2025-05-28 19:36:49.822805 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 7.80s
2025-05-28 19:36:49.822813 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.58s
2025-05-28 19:36:49.822821 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.90s
2025-05-28 19:36:49.822829 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.42s
2025-05-28 19:36:49.822836 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.37s
2025-05-28 19:36:49.822844 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.35s
2025-05-28 19:36:49.822852 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.62s
2025-05-28 19:36:49.822860 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.58s
2025-05-28 19:36:49.822868 | orchestrator | glance : Check glance containers ---------------------------------------- 4.58s
2025-05-28 19:36:49.822876 | orchestrator | glance : Copying over config.json files for services -------------------- 4.12s
2025-05-28 19:36:49.822884 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.06s
2025-05-28 19:36:49.822892 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.05s
2025-05-28 19:36:49.822900 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.35s
2025-05-28 19:36:49.822913 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.28s
2025-05-28 19:36:49.822921 | orchestrator | 2025-05-28 19:36:49 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:36:52.854952 | orchestrator | 2025-05-28 19:36:52 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED
2025-05-28 19:36:52.855118 | orchestrator | 2025-05-28 19:36:52 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED
2025-05-28 19:36:52.856333 | orchestrator | 2025-05-28 19:36:52 | INFO  | Task 7bf1cf2a-4856-46a5-beae-ff20f3091d2c is in state STARTED
2025-05-28 19:36:52.857205 | orchestrator | 2025-05-28 19:36:52 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED
2025-05-28 19:36:52.857658 | orchestrator | 2025-05-28 19:36:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:36:52.858223 | orchestrator | 2025-05-28 19:36:52 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:36:55.903797 | orchestrator | 2025-05-28 19:36:55 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED
2025-05-28 19:36:55.906736 | orchestrator | 2025-05-28 19:36:55 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED
2025-05-28 19:36:55.908877 | orchestrator | 2025-05-28 19:36:55 | INFO  | Task 7bf1cf2a-4856-46a5-beae-ff20f3091d2c is in
state STARTED
2025-05-28 19:36:55.910270 | orchestrator | 2025-05-28 19:36:55 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED
2025-05-28 19:36:55.913657 | orchestrator | 2025-05-28 19:36:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:36:55.913985 | orchestrator | 2025-05-28 19:36:55 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:37:44.778942 | orchestrator | 2025-05-28 19:37:44 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED
2025-05-28 19:37:44.781126 | orchestrator | 2025-05-28 19:37:44 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED
2025-05-28 19:37:44.782177 | orchestrator | 2025-05-28 19:37:44 | INFO  | Task 7bf1cf2a-4856-46a5-beae-ff20f3091d2c is in state SUCCESS
2025-05-28 19:37:44.787097 | orchestrator | 2025-05-28 19:37:44 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED
2025-05-28 19:37:44.787141 | orchestrator | 2025-05-28 19:37:44 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:37:44.787154 | orchestrator | 2025-05-28 19:37:44 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:37:47.850993 | orchestrator | 2025-05-28 19:37:47 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED
2025-05-28 19:37:47.852655 | orchestrator | 2025-05-28 19:37:47 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED
2025-05-28 19:37:47.855251 | orchestrator | 2025-05-28 19:37:47 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED
2025-05-28 19:37:47.856788 | orchestrator | 2025-05-28 19:37:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:37:47.857238 | orchestrator | 2025-05-28 19:37:47 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:38:00.097243 | orchestrator | 2025-05-28 19:38:00 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED
2025-05-28 19:38:00.098700 | orchestrator | 2025-05-28 19:38:00 | INFO  | Task
92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:38:00.100073 | orchestrator | 2025-05-28 19:38:00 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:00.101559 | orchestrator | 2025-05-28 19:38:00 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:00.101610 | orchestrator | 2025-05-28 19:38:00 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:03.154276 | orchestrator | 2025-05-28 19:38:03 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:03.155756 | orchestrator | 2025-05-28 19:38:03 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:38:03.158296 | orchestrator | 2025-05-28 19:38:03 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:03.160595 | orchestrator | 2025-05-28 19:38:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:03.161225 | orchestrator | 2025-05-28 19:38:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:06.205036 | orchestrator | 2025-05-28 19:38:06 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:06.206230 | orchestrator | 2025-05-28 19:38:06 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:38:06.207375 | orchestrator | 2025-05-28 19:38:06 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:06.209672 | orchestrator | 2025-05-28 19:38:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:06.209746 | orchestrator | 2025-05-28 19:38:06 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:09.258986 | orchestrator | 2025-05-28 19:38:09 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:09.259976 | orchestrator | 2025-05-28 19:38:09 | INFO  | Task 
92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:38:09.261596 | orchestrator | 2025-05-28 19:38:09 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:09.262842 | orchestrator | 2025-05-28 19:38:09 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:09.262870 | orchestrator | 2025-05-28 19:38:09 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:12.317048 | orchestrator | 2025-05-28 19:38:12 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:12.318210 | orchestrator | 2025-05-28 19:38:12 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:38:12.321646 | orchestrator | 2025-05-28 19:38:12 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:12.323665 | orchestrator | 2025-05-28 19:38:12 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:12.324263 | orchestrator | 2025-05-28 19:38:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:15.372759 | orchestrator | 2025-05-28 19:38:15 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:15.374089 | orchestrator | 2025-05-28 19:38:15 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:38:15.376021 | orchestrator | 2025-05-28 19:38:15 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:15.378072 | orchestrator | 2025-05-28 19:38:15 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:15.378091 | orchestrator | 2025-05-28 19:38:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:18.427386 | orchestrator | 2025-05-28 19:38:18 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:18.427523 | orchestrator | 2025-05-28 19:38:18 | INFO  | Task 
92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:38:18.429115 | orchestrator | 2025-05-28 19:38:18 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:18.431086 | orchestrator | 2025-05-28 19:38:18 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:18.431166 | orchestrator | 2025-05-28 19:38:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:21.478952 | orchestrator | 2025-05-28 19:38:21 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:21.480139 | orchestrator | 2025-05-28 19:38:21 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:38:21.482386 | orchestrator | 2025-05-28 19:38:21 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:21.484405 | orchestrator | 2025-05-28 19:38:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:21.484432 | orchestrator | 2025-05-28 19:38:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:24.538222 | orchestrator | 2025-05-28 19:38:24 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:24.539292 | orchestrator | 2025-05-28 19:38:24 | INFO  | Task 92a174f9-6eee-46aa-a4f2-6800444452e7 is in state STARTED 2025-05-28 19:38:24.542944 | orchestrator | 2025-05-28 19:38:24 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:24.545289 | orchestrator | 2025-05-28 19:38:24 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:24.545331 | orchestrator | 2025-05-28 19:38:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:27.593387 | orchestrator | 2025-05-28 19:38:27 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:27.595503 | orchestrator | 2025-05-28 19:38:27 | INFO  | Task 
92a174f9-6eee-46aa-a4f2-6800444452e7 is in state SUCCESS 2025-05-28 19:38:27.596703 | orchestrator | 2025-05-28 19:38:27 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:27.599438 | orchestrator | 2025-05-28 19:38:27 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:27.599481 | orchestrator | 2025-05-28 19:38:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:30.650736 | orchestrator | 2025-05-28 19:38:30 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:30.652283 | orchestrator | 2025-05-28 19:38:30 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:30.654731 | orchestrator | 2025-05-28 19:38:30 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:30.654772 | orchestrator | 2025-05-28 19:38:30 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:33.701642 | orchestrator | 2025-05-28 19:38:33 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:33.704277 | orchestrator | 2025-05-28 19:38:33 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:33.706207 | orchestrator | 2025-05-28 19:38:33 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:33.706815 | orchestrator | 2025-05-28 19:38:33 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:36.769279 | orchestrator | 2025-05-28 19:38:36 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:36.772830 | orchestrator | 2025-05-28 19:38:36 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:36.773157 | orchestrator | 2025-05-28 19:38:36 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:36.773182 | orchestrator | 2025-05-28 19:38:36 | INFO  | Wait 1 second(s) until the next 
check 2025-05-28 19:38:39.823679 | orchestrator | 2025-05-28 19:38:39 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:39.825153 | orchestrator | 2025-05-28 19:38:39 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state STARTED 2025-05-28 19:38:39.826871 | orchestrator | 2025-05-28 19:38:39 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:39.827175 | orchestrator | 2025-05-28 19:38:39 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:42.864506 | orchestrator | 2025-05-28 19:38:42 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:42.865193 | orchestrator | 2025-05-28 19:38:42 | INFO  | Task 5fc6d5fd-4e92-4760-9078-93103ebb31bf is in state SUCCESS 2025-05-28 19:38:42.867058 | orchestrator | 2025-05-28 19:38:42.867111 | orchestrator | 2025-05-28 19:38:42.867132 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:38:42.867158 | orchestrator | 2025-05-28 19:38:42.867181 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:38:42.867786 | orchestrator | Wednesday 28 May 2025 19:36:50 +0000 (0:00:00.374) 0:00:00.374 ********* 2025-05-28 19:38:42.867838 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:38:42.867852 | orchestrator | ok: [testbed-node-1] 2025-05-28 19:38:42.867864 | orchestrator | ok: [testbed-node-2] 2025-05-28 19:38:42.867881 | orchestrator | 2025-05-28 19:38:42.867900 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:38:42.867919 | orchestrator | Wednesday 28 May 2025 19:36:50 +0000 (0:00:00.421) 0:00:00.796 ********* 2025-05-28 19:38:42.867939 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-28 19:38:42.867958 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-28 19:38:42.867976 | 
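The repeated "is in state STARTED ... Wait 1 second(s) until the next check" messages above come from a task-state polling loop on the orchestrator. A minimal sketch of that pattern follows; the `get_state` callable is a hypothetical stand-in for the real task-state lookup, not the actual osism client API:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll each task's state until all of them reach a terminal state.

    `get_state` is a hypothetical callable (task_id -> state string);
    the real client would query the task backend instead.
    """
    pending = set(task_ids)
    states = {}
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Keep only tasks that have not yet reached a terminal state.
        pending = {t for t in pending if states[t] not in ("SUCCESS", "FAILURE")}
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Note that in the log above the checks arrive roughly every 3 seconds even though the message says "Wait 1 second(s)"; the state lookups themselves take time on top of the sleep.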
orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-05-28 19:38:42.868007 | orchestrator |
2025-05-28 19:38:42.868105 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-05-28 19:38:42.868122 | orchestrator |
2025-05-28 19:38:42.868172 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-28 19:38:42.868184 | orchestrator | Wednesday 28 May 2025 19:36:51 +0000 (0:00:00.332)       0:00:01.128 *********
2025-05-28 19:38:42.868195 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:38:42.868208 | orchestrator |
2025-05-28 19:38:42.868219 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-05-28 19:38:42.868230 | orchestrator | Wednesday 28 May 2025 19:36:51 +0000 (0:00:00.800)       0:00:01.928 *********
2025-05-28 19:38:42.868242 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-05-28 19:38:42.868253 | orchestrator |
2025-05-28 19:38:42.868264 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-05-28 19:38:42.868275 | orchestrator | Wednesday 28 May 2025 19:36:55 +0000 (0:00:03.231)       0:00:05.160 *********
2025-05-28 19:38:42.868286 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-05-28 19:38:42.868298 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-05-28 19:38:42.868309 | orchestrator |
2025-05-28 19:38:42.868319 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-05-28 19:38:42.868330 | orchestrator | Wednesday 28 May 2025 19:37:01 +0000 (0:00:06.343)       0:00:11.504 *********
2025-05-28 19:38:42.868341 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-28 19:38:42.868353 | orchestrator |
2025-05-28 19:38:42.868363 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-05-28 19:38:42.868374 | orchestrator | Wednesday 28 May 2025 19:37:04 +0000 (0:00:03.377)       0:00:14.881 *********
2025-05-28 19:38:42.868385 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-28 19:38:42.868397 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-05-28 19:38:42.868408 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-05-28 19:38:42.868419 | orchestrator |
2025-05-28 19:38:42.868430 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-05-28 19:38:42.868441 | orchestrator | Wednesday 28 May 2025 19:37:13 +0000 (0:00:08.222)       0:00:23.103 *********
2025-05-28 19:38:42.868452 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-28 19:38:42.868487 | orchestrator |
2025-05-28 19:38:42.868499 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-05-28 19:38:42.868510 | orchestrator | Wednesday 28 May 2025 19:37:16 +0000 (0:00:03.280)       0:00:26.384 *********
2025-05-28 19:38:42.868521 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-05-28 19:38:42.868532 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-05-28 19:38:42.868543 | orchestrator |
2025-05-28 19:38:42.868554 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-05-28 19:38:42.868564 | orchestrator | Wednesday 28 May 2025 19:37:23 +0000 (0:00:07.642)       0:00:34.026 *********
2025-05-28 19:38:42.868576 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-05-28 19:38:42.868599 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-05-28 19:38:42.868610 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-05-28 19:38:42.868621 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-05-28 19:38:42.868632 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-05-28 19:38:42.868643 | orchestrator |
2025-05-28 19:38:42.868654 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-28 19:38:42.868664 | orchestrator | Wednesday 28 May 2025 19:37:39 +0000 (0:00:15.871)       0:00:49.897 *********
2025-05-28 19:38:42.868675 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:38:42.868686 | orchestrator |
2025-05-28 19:38:42.868697 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-05-28 19:38:42.868708 | orchestrator | Wednesday 28 May 2025 19:37:40 +0000 (0:00:00.774)       0:00:50.672 *********
2025-05-28 19:38:42.868750 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "503 Service Unavailable\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "}
2025-05-28 19:38:42.868767 | orchestrator |
2025-05-28 19:38:42.868778 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:38:42.868791 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-05-28 19:38:42.868803 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:38:42.868815 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:38:42.868826 | orchestrator |
2025-05-28 19:38:42.868848 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:38:42.868859 | orchestrator | Wednesday 28 May 2025 19:37:43 +0000 (0:00:03.183)       0:00:53.855 *********
2025-05-28 19:38:42.868870 | orchestrator | ===============================================================================
2025-05-28 19:38:42.868880 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.87s
2025-05-28 19:38:42.868891 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.22s
2025-05-28 19:38:42.868902 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.64s
2025-05-28 19:38:42.868913 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.34s
2025-05-28 19:38:42.868924 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.38s
2025-05-28 19:38:42.868935 | orchestrator | service-ks-register : octavia | Creating roles --------------------------
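The fatal task above failed because the load balancer in front of nova-api answered the flavor-create request with a bare 503 page: no backend was available yet at https://api-int.testbed.osism.xyz:8774. A transient 503 like this can often be absorbed by retrying with backoff. A generic sketch of that idea follows; the `HttpError` class and `call_with_retry` helper are illustrative stand-ins, not part of any real SDK:

```python
import time

class HttpError(Exception):
    """Minimal stand-in for an SDK HTTP error carrying a status code."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def call_with_retry(fn, retries=5, delay=1.0, backoff=2.0):
    """Retry a zero-argument callable (e.g. a flavor-create request) on
    transient 5xx errors, doubling the wait between attempts."""
    for attempt in range(retries):
        try:
            return fn()
        except HttpError as exc:
            # Re-raise immediately on client errors or on the final attempt.
            if exc.status < 500 or attempt == retries - 1:
                raise
            time.sleep(delay)
            delay *= backoff
```

A retry only helps if the backend eventually registers; a persistent 503 (as in this run, where nova-api never became healthy in time) still surfaces as a failure once the attempts are exhausted.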
3.28s
2025-05-28 19:38:42.868946 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.23s
2025-05-28 19:38:42.868957 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.18s
2025-05-28 19:38:42.868967 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.80s
2025-05-28 19:38:42.868979 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.77s
2025-05-28 19:38:42.868990 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s
2025-05-28 19:38:42.869001 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s
2025-05-28 19:38:42.869011 | orchestrator |
2025-05-28 19:38:42.869033 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 19:38:42.869052 | orchestrator |
2025-05-28 19:38:42.869071 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 19:38:42.869089 | orchestrator | Wednesday 28 May 2025 19:36:44 +0000 (0:00:00.229)       0:00:00.229 *********
2025-05-28 19:38:42.869107 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:38:42.869125 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:38:42.869143 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:38:42.869162 | orchestrator |
2025-05-28 19:38:42.869180 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 19:38:42.869199 | orchestrator | Wednesday 28 May 2025 19:36:44 +0000 (0:00:00.382)       0:00:00.612 *********
2025-05-28 19:38:42.869211 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-05-28 19:38:42.869222 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-05-28 19:38:42.869234 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-05-28 19:38:42.869245 | orchestrator |
2025-05-28 19:38:42.869256 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-05-28 19:38:42.869267 | orchestrator |
2025-05-28 19:38:42.869278 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-05-28 19:38:42.869289 | orchestrator | Wednesday 28 May 2025 19:36:44 +0000 (0:00:00.467)       0:00:01.079 *********
2025-05-28 19:38:42.869300 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:38:42.869311 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:38:42.869322 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:38:42.869333 | orchestrator |
2025-05-28 19:38:42.869344 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:38:42.869355 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:38:42.869366 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:38:42.869378 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:38:42.869389 | orchestrator |
2025-05-28 19:38:42.869411 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:38:42.869422 | orchestrator | Wednesday 28 May 2025 19:38:26 +0000 (0:01:41.871)       0:01:42.950 *********
2025-05-28 19:38:42.869433 | orchestrator | ===============================================================================
2025-05-28 19:38:42.869443 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 101.87s
2025-05-28 19:38:42.869454 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2025-05-28 19:38:42.869498 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2025-05-28 19:38:42.869510 | orchestrator |
2025-05-28 19:38:42.869538 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-28 19:38:42.869549 | orchestrator |
2025-05-28 19:38:42.869560 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-28 19:38:42.869580 | orchestrator | Wednesday 28 May 2025 19:36:50 +0000 (0:00:00.293)       0:00:00.293 *********
2025-05-28 19:38:42.869592 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:38:42.869603 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:38:42.869614 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:38:42.869624 | orchestrator |
2025-05-28 19:38:42.869635 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-28 19:38:42.869646 | orchestrator | Wednesday 28 May 2025 19:36:50 +0000 (0:00:00.383)       0:00:00.677 *********
2025-05-28 19:38:42.869657 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-05-28 19:38:42.869668 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-05-28 19:38:42.869680 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-05-28 19:38:42.869690 | orchestrator |
2025-05-28 19:38:42.869710 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-05-28 19:38:42.869721 | orchestrator |
2025-05-28 19:38:42.869732 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-05-28 19:38:42.869743 | orchestrator | Wednesday 28 May 2025 19:36:50 +0000 (0:00:00.290)       0:00:00.968 *********
2025-05-28 19:38:42.869754 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:38:42.869765 | orchestrator |
2025-05-28 19:38:42.869776 |
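The 101.87 s spent in "Waiting for Nova public port to be UP" is a plain TCP reachability wait of the kind Ansible's wait_for-style tasks perform. A sketch of the same idea in Python follows; the `wait_for_port` name and its parameters are illustrative, not taken from the playbook:

```python
import socket
import time

def wait_for_port(host, port, timeout=300.0, interval=3.0):
    """Block until a TCP connection to host:port succeeds, retrying every
    `interval` seconds, or raise TimeoutError once `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # Port accepted a connection: the service is up.
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError(f"{host}:{port} not reachable after {timeout}s")
            time.sleep(interval)
```

The long wait here is expected behavior rather than an error: the task simply blocks until the Nova API endpoint starts accepting connections.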
orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-28 19:38:42.869787 | orchestrator | Wednesday 28 May 2025 19:36:51 +0000 (0:00:00.924) 0:00:01.892 ********* 2025-05-28 19:38:42.869800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:38:42.869814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:38:42.869826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:38:42.869837 | orchestrator | 2025-05-28 19:38:42.869848 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-28 19:38:42.869859 | orchestrator | Wednesday 28 May 2025 19:36:52 +0000 (0:00:00.818) 0:00:02.711 ********* 2025-05-28 19:38:42.869870 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-28 19:38:42.869881 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-28 19:38:42.869892 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 19:38:42.869903 | orchestrator | 2025-05-28 19:38:42.869914 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-28 19:38:42.869925 | orchestrator | Wednesday 28 May 2025 19:36:53 +0000 (0:00:00.632) 0:00:03.344 ********* 2025-05-28 19:38:42.869936 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:38:42.869947 | orchestrator | 2025-05-28 19:38:42.869958 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-28 19:38:42.869969 | orchestrator | Wednesday 28 May 2025 19:36:53 +0000 (0:00:00.610) 0:00:03.954 ********* 2025-05-28 19:38:42.870001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:38:42.870014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:38:42.870077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:38:42.870089 | orchestrator | 2025-05-28 19:38:42.870100 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal 
TLS certificate] ***
2025-05-28 19:38:42.870112 | orchestrator | Wednesday 28 May 2025 19:36:55 +0000 (0:00:01.381) 0:00:05.336 *********
2025-05-28 19:38:42.870123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870135 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:38:42.870147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870158 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:38:42.870184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870205 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:38:42.870216 | orchestrator |
2025-05-28 19:38:42.870227 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-05-28 19:38:42.870238 | orchestrator | Wednesday 28 May 2025 19:36:55 +0000 (0:00:00.649) 0:00:05.985 *********
2025-05-28 19:38:42.870249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870261 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:38:42.870272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870283 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:38:42.870295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870306 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:38:42.870317 | orchestrator |
2025-05-28 19:38:42.870329 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-05-28 19:38:42.870339 | orchestrator | Wednesday 28 May 2025 19:36:56 +0000 (0:00:00.658) 0:00:06.644 *********
2025-05-28 19:38:42.870351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False,
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870409 | orchestrator |
2025-05-28 19:38:42.870428 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-05-28 19:38:42.870447 | orchestrator | Wednesday 28 May 2025 19:36:57 +0000 (0:00:01.371) 0:00:08.015 *********
2025-05-28 19:38:42.870553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-28 19:38:42.870618 | orchestrator |
2025-05-28 19:38:42.870630 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-05-28 19:38:42.870641 | orchestrator | Wednesday 28 May 2025 19:36:59 +0000 (0:00:01.595) 0:00:09.611 *********
2025-05-28 19:38:42.870663 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:38:42.870674 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:38:42.870686 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:38:42.870697 | orchestrator |
2025-05-28 19:38:42.870708 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-05-28 19:38:42.870719 | orchestrator | Wednesday 28 May 2025 19:36:59 +0000 (0:00:00.288) 0:00:09.899 *********
2025-05-28 19:38:42.870750 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-28 19:38:42.870762 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-28 19:38:42.870773 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-28 19:38:42.870784 | orchestrator |
2025-05-28 19:38:42.870795 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-05-28 19:38:42.870806 | orchestrator | Wednesday 28 May 2025 19:37:01 +0000 (0:00:01.394) 0:00:11.294 *********
2025-05-28 19:38:42.870817 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-28 19:38:42.870828 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-28 19:38:42.870852 | orchestrator | changed: [testbed-node-2] =>
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-28 19:38:42.870864 | orchestrator |
2025-05-28 19:38:42.870884 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-05-28 19:38:42.870895 | orchestrator | Wednesday 28 May 2025 19:37:02 +0000 (0:00:01.363) 0:00:12.658 *********
2025-05-28 19:38:42.870906 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-28 19:38:42.870917 | orchestrator |
2025-05-28 19:38:42.870928 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-05-28 19:38:42.870940 | orchestrator | Wednesday 28 May 2025 19:37:03 +0000 (0:00:00.395) 0:00:13.053 *********
2025-05-28 19:38:42.870951 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-05-28 19:38:42.870962 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-05-28 19:38:42.870973 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:38:42.870985 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:38:42.870996 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:38:42.871007 | orchestrator |
2025-05-28 19:38:42.871018 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-05-28 19:38:42.871029 | orchestrator | Wednesday 28 May 2025 19:37:04 +0000 (0:00:01.102) 0:00:14.155 *********
2025-05-28 19:38:42.871040 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:38:42.871051 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:38:42.871062 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:38:42.871071 | orchestrator |
2025-05-28 19:38:42.871084 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-05-28 19:38:42.871101 | orchestrator | Wednesday 28 May 2025 19:37:04 +0000 (0:00:00.435) 0:00:14.591 *********
2025-05-28 19:38:42.871119 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1117027, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5916984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1117027, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5916984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1117027, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5916984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1117009, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5866983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1117009, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5866983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1117009, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5866983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False,
'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1117000, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5836983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1117000, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5836983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1117000, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5836983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1117017, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5876982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1117017, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5876982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1117017, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5876982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False,
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1116984, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5806983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1116984, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5806983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1116984, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5806983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1117001, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5846982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1117001, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5846982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1117001, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5846982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp':
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1117015, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5876982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1117015, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5876982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1117015, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5876982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1116980, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.579698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1116980, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.579698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1116980, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime':
1748457779.579698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1116965, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.573698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1116965, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.573698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1116965, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.573698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1116991, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5816982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1116991, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5816982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-28 19:38:42.871722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1116991, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime':
1748457779.5816982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1116973, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.577698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1116973, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.577698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1116973, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1748457779.577698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1117012, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.5866983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1117012, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.5866983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1117012, 'dev': 169, 
'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.5866983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1116994, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.582698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1116994, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.582698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1116994, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.582698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1117022, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5896983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1117022, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5896983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1117022, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5896983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1116977, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5786982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1116977, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5786982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1116977, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5786982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1117003, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5856984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1117003, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5856984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1117003, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5856984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.871985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1116967, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5756981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1116967, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5756981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1116967, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5756981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1116975, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5786982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1116975, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5786982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872054 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1116975, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5786982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1116998, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5836983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1116998, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5836983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872095 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1116998, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5836983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1117097, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6156988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1117097, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6156988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1117097, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6156988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1117090, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6066985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1117090, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6066985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1117090, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6066985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1117181, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.626699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1117181, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1748457779.626699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1117181, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.626699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1117038, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5926983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
31128, 'inode': 1117038, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5926983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1117038, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5926983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1117190, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.629699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1117190, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.629699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1117190, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.629699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1117133, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6166987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1117133, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6166987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1117133, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6166987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1117139, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.622699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872429 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1117139, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.622699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1117139, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.622699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1117042, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5926983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1117042, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5926983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1117042, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5926983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1117094, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6066985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1117094, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6066985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1117094, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6066985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1117196, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.629699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1117196, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.629699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1117196, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.629699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1117167, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6246989, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1117167, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6246989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1117167, 'dev': 169, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748457779.6246989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1117054, 'dev': 169, 'nlink': 1, 
'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5976985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1117054, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5976985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1117054, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5976985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1117048, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5946984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1117048, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5946984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1117048, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5946984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1117065, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5996985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1117065, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5996985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1117065, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.5996985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1117069, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6056986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1117069, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6056986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1117069, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.6056986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': 
'/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1117203, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.631699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1117203, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.631699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1117203, 'dev': 169, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748457779.631699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-28 19:38:42.872839 | orchestrator | 2025-05-28 19:38:42.872850 | orchestrator | TASK [grafana : 
Check grafana containers] ************************************** 2025-05-28 19:38:42.872860 | orchestrator | Wednesday 28 May 2025 19:37:37 +0000 (0:00:33.023) 0:00:47.614 ********* 2025-05-28 19:38:42.872940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:38:42.872953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:38:42.872964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-28 19:38:42.872974 | orchestrator | 2025-05-28 19:38:42.872984 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-28 19:38:42.872994 | orchestrator | Wednesday 28 May 2025 19:37:38 +0000 (0:00:01.017) 0:00:48.632 ********* 2025-05-28 19:38:42.873004 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:38:42.873014 | orchestrator | 2025-05-28 19:38:42.873024 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-28 19:38:42.873033 | orchestrator | Wednesday 28 May 2025 19:37:41 +0000 (0:00:02.469) 0:00:51.102 ********* 2025-05-28 19:38:42.873043 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:38:42.873052 | orchestrator | 2025-05-28 19:38:42.873062 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-28 19:38:42.873072 | orchestrator | Wednesday 28 May 2025 19:37:43 +0000 (0:00:02.367) 0:00:53.469 ********* 2025-05-28 19:38:42.873081 | orchestrator | 2025-05-28 19:38:42.873097 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-28 19:38:42.873107 | orchestrator | Wednesday 28 May 2025 19:37:43 +0000 (0:00:00.063) 0:00:53.532 ********* 2025-05-28 19:38:42.873117 | orchestrator | 2025-05-28 19:38:42.873126 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-28 19:38:42.873135 | orchestrator | Wednesday 28 May 2025 19:37:43 +0000 (0:00:00.051) 0:00:53.584 ********* 2025-05-28 19:38:42.873145 | orchestrator | 2025-05-28 19:38:42.873155 | orchestrator | 
RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-28 19:38:42.873164 | orchestrator | Wednesday 28 May 2025 19:37:43 +0000 (0:00:00.196) 0:00:53.780 ********* 2025-05-28 19:38:42.873174 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:38:42.873183 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:38:42.873193 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:38:42.873203 | orchestrator | 2025-05-28 19:38:42.873212 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-28 19:38:42.873222 | orchestrator | Wednesday 28 May 2025 19:37:50 +0000 (0:00:06.904) 0:01:00.685 ********* 2025-05-28 19:38:42.873231 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:38:42.873241 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:38:42.873251 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-28 19:38:42.873261 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2025-05-28 19:38:42.873270 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:38:42.873281 | orchestrator | 2025-05-28 19:38:42.873290 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-28 19:38:42.873300 | orchestrator | Wednesday 28 May 2025 19:38:17 +0000 (0:00:26.749) 0:01:27.435 ********* 2025-05-28 19:38:42.873309 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:38:42.873320 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:38:42.873338 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:38:42.873354 | orchestrator | 2025-05-28 19:38:42.873372 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-28 19:38:42.873389 | orchestrator | Wednesday 28 May 2025 19:38:35 +0000 (0:00:18.379) 0:01:45.814 ********* 2025-05-28 19:38:42.873405 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:38:42.873421 | orchestrator | 2025-05-28 19:38:42.873444 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-28 19:38:42.873528 | orchestrator | Wednesday 28 May 2025 19:38:37 +0000 (0:00:02.126) 0:01:47.940 ********* 2025-05-28 19:38:42.873548 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:38:42.873572 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:38:42.873589 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:38:42.873607 | orchestrator | 2025-05-28 19:38:42.873622 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-28 19:38:42.873639 | orchestrator | Wednesday 28 May 2025 19:38:38 +0000 (0:00:00.456) 0:01:48.397 ********* 2025-05-28 19:38:42.873650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-05-28 19:38:42.873662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-05-28 19:38:42.873673 | orchestrator | 2025-05-28 19:38:42.873683 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-05-28 19:38:42.873693 | orchestrator | Wednesday 28 May 2025 19:38:40 +0000 (0:00:02.381) 0:01:50.778 ********* 2025-05-28 19:38:42.873702 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:38:42.873722 | orchestrator | 2025-05-28 19:38:42.873732 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-28 19:38:42.873742 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 19:38:42.873753 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 19:38:42.873763 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-28 19:38:42.873773 | orchestrator | 2025-05-28 19:38:42.873783 | orchestrator | 2025-05-28 19:38:42.873793 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-28 19:38:42.873802 | orchestrator | Wednesday 28 May 2025 19:38:41 +0000 (0:00:00.443) 0:01:51.222 ********* 2025-05-28 19:38:42.873812 | orchestrator | =============================================================================== 2025-05-28 19:38:42.873822 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 33.02s 2025-05-28 19:38:42.873832 | orchestrator | grafana : Waiting for grafana 
to start on first node ------------------- 26.75s 2025-05-28 19:38:42.873842 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 18.38s 2025-05-28 19:38:42.873851 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.90s 2025-05-28 19:38:42.873861 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.47s 2025-05-28 19:38:42.873871 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.38s 2025-05-28 19:38:42.873881 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.37s 2025-05-28 19:38:42.873890 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.13s 2025-05-28 19:38:42.873900 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.60s 2025-05-28 19:38:42.873910 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.39s 2025-05-28 19:38:42.873920 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.38s 2025-05-28 19:38:42.873930 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.37s 2025-05-28 19:38:42.873940 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.36s 2025-05-28 19:38:42.873949 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.10s 2025-05-28 19:38:42.873959 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.02s 2025-05-28 19:38:42.873969 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.92s 2025-05-28 19:38:42.873979 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.82s 2025-05-28 19:38:42.873988 | orchestrator | service-cert-copy : grafana | Copying over 
backend internal TLS key ----- 0.66s 2025-05-28 19:38:42.873998 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.65s 2025-05-28 19:38:42.874008 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.63s 2025-05-28 19:38:42.874049 | orchestrator | 2025-05-28 19:38:42 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:42.874059 | orchestrator | 2025-05-28 19:38:42 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:38:45.911370 | orchestrator | 2025-05-28 19:38:45 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:38:45.915099 | orchestrator | 2025-05-28 19:38:45 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:38:45.915139 | orchestrator | 2025-05-28 19:38:45 | INFO  | Wait 1 second(s) until the next check
Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:39:28.593649 | orchestrator | 2025-05-28 19:39:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:28.593678 | orchestrator | 2025-05-28 19:39:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:39:31.637416 | orchestrator | 2025-05-28 19:39:31 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:39:31.638336 | orchestrator | 2025-05-28 19:39:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:31.638413 | orchestrator | 2025-05-28 19:39:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:39:34.681085 | orchestrator | 2025-05-28 19:39:34 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:39:34.684034 | orchestrator | 2025-05-28 19:39:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:34.684116 | orchestrator | 2025-05-28 19:39:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:39:37.753557 | orchestrator | 2025-05-28 19:39:37 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:39:37.753828 | orchestrator | 2025-05-28 19:39:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:37.753856 | orchestrator | 2025-05-28 19:39:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:39:40.809679 | orchestrator | 2025-05-28 19:39:40 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:39:40.809923 | orchestrator | 2025-05-28 19:39:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:40.809939 | orchestrator | 2025-05-28 19:39:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:39:43.845745 | orchestrator | 2025-05-28 19:39:43 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 
19:39:43.846557 | orchestrator | 2025-05-28 19:39:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:43.846575 | orchestrator | 2025-05-28 19:39:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:39:46.896561 | orchestrator | 2025-05-28 19:39:46 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:39:46.899348 | orchestrator | 2025-05-28 19:39:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:46.899419 | orchestrator | 2025-05-28 19:39:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:39:49.939827 | orchestrator | 2025-05-28 19:39:49 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:39:49.944987 | orchestrator | 2025-05-28 19:39:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:49.945052 | orchestrator | 2025-05-28 19:39:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:39:52.991070 | orchestrator | 2025-05-28 19:39:52 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:39:52.991286 | orchestrator | 2025-05-28 19:39:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:52.991298 | orchestrator | 2025-05-28 19:39:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:39:56.046576 | orchestrator | 2025-05-28 19:39:56 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:39:56.046687 | orchestrator | 2025-05-28 19:39:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:56.046704 | orchestrator | 2025-05-28 19:39:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:39:59.102633 | orchestrator | 2025-05-28 19:39:59 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:39:59.103566 | orchestrator | 2025-05-28 19:39:59 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:39:59.103699 | orchestrator | 2025-05-28 19:39:59 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:02.167196 | orchestrator | 2025-05-28 19:40:02 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:02.168605 | orchestrator | 2025-05-28 19:40:02 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:02.168646 | orchestrator | 2025-05-28 19:40:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:05.231782 | orchestrator | 2025-05-28 19:40:05 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:05.232811 | orchestrator | 2025-05-28 19:40:05 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:05.232851 | orchestrator | 2025-05-28 19:40:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:08.279357 | orchestrator | 2025-05-28 19:40:08 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:08.280588 | orchestrator | 2025-05-28 19:40:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:08.280622 | orchestrator | 2025-05-28 19:40:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:11.333958 | orchestrator | 2025-05-28 19:40:11 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:11.335124 | orchestrator | 2025-05-28 19:40:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:11.335276 | orchestrator | 2025-05-28 19:40:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:14.381514 | orchestrator | 2025-05-28 19:40:14 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:14.381622 | orchestrator | 2025-05-28 19:40:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 
19:40:14.381637 | orchestrator | 2025-05-28 19:40:14 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:17.447816 | orchestrator | 2025-05-28 19:40:17 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:17.448938 | orchestrator | 2025-05-28 19:40:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:17.448978 | orchestrator | 2025-05-28 19:40:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:20.498973 | orchestrator | 2025-05-28 19:40:20 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:20.499674 | orchestrator | 2025-05-28 19:40:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:20.499712 | orchestrator | 2025-05-28 19:40:20 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:23.545286 | orchestrator | 2025-05-28 19:40:23 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:23.546282 | orchestrator | 2025-05-28 19:40:23 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:23.546327 | orchestrator | 2025-05-28 19:40:23 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:26.591856 | orchestrator | 2025-05-28 19:40:26 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:26.592886 | orchestrator | 2025-05-28 19:40:26 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:26.592917 | orchestrator | 2025-05-28 19:40:26 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:29.633911 | orchestrator | 2025-05-28 19:40:29 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:29.633998 | orchestrator | 2025-05-28 19:40:29 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:29.634068 | orchestrator | 2025-05-28 19:40:29 | INFO  | Wait 1 second(s) 
until the next check 2025-05-28 19:40:32.687748 | orchestrator | 2025-05-28 19:40:32 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:32.687855 | orchestrator | 2025-05-28 19:40:32 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:32.687871 | orchestrator | 2025-05-28 19:40:32 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:35.737332 | orchestrator | 2025-05-28 19:40:35 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:35.737821 | orchestrator | 2025-05-28 19:40:35 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:35.737855 | orchestrator | 2025-05-28 19:40:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:38.787672 | orchestrator | 2025-05-28 19:40:38 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:38.788997 | orchestrator | 2025-05-28 19:40:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:38.789029 | orchestrator | 2025-05-28 19:40:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:41.833060 | orchestrator | 2025-05-28 19:40:41 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:41.834313 | orchestrator | 2025-05-28 19:40:41 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:41.834526 | orchestrator | 2025-05-28 19:40:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:44.878836 | orchestrator | 2025-05-28 19:40:44 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:44.880555 | orchestrator | 2025-05-28 19:40:44 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:44.880600 | orchestrator | 2025-05-28 19:40:44 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:47.933887 | orchestrator | 2025-05-28 
19:40:47 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:47.935459 | orchestrator | 2025-05-28 19:40:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:47.935495 | orchestrator | 2025-05-28 19:40:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:50.979832 | orchestrator | 2025-05-28 19:40:50 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:50.981766 | orchestrator | 2025-05-28 19:40:50 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:50.982509 | orchestrator | 2025-05-28 19:40:50 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:54.053179 | orchestrator | 2025-05-28 19:40:54 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:54.053275 | orchestrator | 2025-05-28 19:40:54 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:54.053290 | orchestrator | 2025-05-28 19:40:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:40:57.101947 | orchestrator | 2025-05-28 19:40:57 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:40:57.108786 | orchestrator | 2025-05-28 19:40:57 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:40:57.108873 | orchestrator | 2025-05-28 19:40:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:00.145511 | orchestrator | 2025-05-28 19:41:00 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:00.146583 | orchestrator | 2025-05-28 19:41:00 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:00.146618 | orchestrator | 2025-05-28 19:41:00 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:03.193494 | orchestrator | 2025-05-28 19:41:03 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state 
STARTED 2025-05-28 19:41:03.195722 | orchestrator | 2025-05-28 19:41:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:03.195757 | orchestrator | 2025-05-28 19:41:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:06.242892 | orchestrator | 2025-05-28 19:41:06 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:06.244301 | orchestrator | 2025-05-28 19:41:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:06.244332 | orchestrator | 2025-05-28 19:41:06 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:09.293039 | orchestrator | 2025-05-28 19:41:09 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:09.295451 | orchestrator | 2025-05-28 19:41:09 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:09.295679 | orchestrator | 2025-05-28 19:41:09 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:12.345691 | orchestrator | 2025-05-28 19:41:12 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:12.348961 | orchestrator | 2025-05-28 19:41:12 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:12.349019 | orchestrator | 2025-05-28 19:41:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:15.402364 | orchestrator | 2025-05-28 19:41:15 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:15.403495 | orchestrator | 2025-05-28 19:41:15 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:15.403951 | orchestrator | 2025-05-28 19:41:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:18.451837 | orchestrator | 2025-05-28 19:41:18 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:18.451966 | orchestrator | 2025-05-28 19:41:18 | INFO  
| Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:18.451983 | orchestrator | 2025-05-28 19:41:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:21.489904 | orchestrator | 2025-05-28 19:41:21 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:21.491655 | orchestrator | 2025-05-28 19:41:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:21.491758 | orchestrator | 2025-05-28 19:41:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:24.547055 | orchestrator | 2025-05-28 19:41:24 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:24.549035 | orchestrator | 2025-05-28 19:41:24 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:24.549449 | orchestrator | 2025-05-28 19:41:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:27.594198 | orchestrator | 2025-05-28 19:41:27 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:27.595544 | orchestrator | 2025-05-28 19:41:27 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:27.595622 | orchestrator | 2025-05-28 19:41:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:30.646528 | orchestrator | 2025-05-28 19:41:30 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:30.647901 | orchestrator | 2025-05-28 19:41:30 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:30.648044 | orchestrator | 2025-05-28 19:41:30 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:33.696089 | orchestrator | 2025-05-28 19:41:33 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:33.697577 | orchestrator | 2025-05-28 19:41:33 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 
19:41:33.697612 | orchestrator | 2025-05-28 19:41:33 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:36.743946 | orchestrator | 2025-05-28 19:41:36 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:36.745620 | orchestrator | 2025-05-28 19:41:36 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:36.745670 | orchestrator | 2025-05-28 19:41:36 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:39.792241 | orchestrator | 2025-05-28 19:41:39 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:39.792772 | orchestrator | 2025-05-28 19:41:39 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:39.792806 | orchestrator | 2025-05-28 19:41:39 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:42.843036 | orchestrator | 2025-05-28 19:41:42 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:42.844416 | orchestrator | 2025-05-28 19:41:42 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:42.844973 | orchestrator | 2025-05-28 19:41:42 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:45.900492 | orchestrator | 2025-05-28 19:41:45 | INFO  | Task bb8ca9f7-12fb-452d-9c86-fa3358706b8a is in state STARTED 2025-05-28 19:41:45.901435 | orchestrator | 2025-05-28 19:41:45 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:45.904019 | orchestrator | 2025-05-28 19:41:45 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:45.904048 | orchestrator | 2025-05-28 19:41:45 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:48.959889 | orchestrator | 2025-05-28 19:41:48 | INFO  | Task bb8ca9f7-12fb-452d-9c86-fa3358706b8a is in state STARTED 2025-05-28 19:41:48.960388 | orchestrator | 2025-05-28 19:41:48 | INFO  | Task 
addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:48.962086 | orchestrator | 2025-05-28 19:41:48 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:48.962186 | orchestrator | 2025-05-28 19:41:48 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:52.017275 | orchestrator | 2025-05-28 19:41:52 | INFO  | Task bb8ca9f7-12fb-452d-9c86-fa3358706b8a is in state STARTED 2025-05-28 19:41:52.019015 | orchestrator | 2025-05-28 19:41:52 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:52.020436 | orchestrator | 2025-05-28 19:41:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:52.020802 | orchestrator | 2025-05-28 19:41:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:55.065951 | orchestrator | 2025-05-28 19:41:55 | INFO  | Task bb8ca9f7-12fb-452d-9c86-fa3358706b8a is in state STARTED 2025-05-28 19:41:55.066078 | orchestrator | 2025-05-28 19:41:55 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:55.066863 | orchestrator | 2025-05-28 19:41:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:55.066900 | orchestrator | 2025-05-28 19:41:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:41:58.110418 | orchestrator | 2025-05-28 19:41:58 | INFO  | Task bb8ca9f7-12fb-452d-9c86-fa3358706b8a is in state SUCCESS 2025-05-28 19:41:58.111906 | orchestrator | 2025-05-28 19:41:58 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:41:58.113169 | orchestrator | 2025-05-28 19:41:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:41:58.113411 | orchestrator | 2025-05-28 19:41:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:01.167394 | orchestrator | 2025-05-28 19:42:01 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state 
STARTED 2025-05-28 19:42:01.167489 | orchestrator | 2025-05-28 19:42:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:01.167508 | orchestrator | 2025-05-28 19:42:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:04.200208 | orchestrator | 2025-05-28 19:42:04 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:04.200267 | orchestrator | 2025-05-28 19:42:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:04.200275 | orchestrator | 2025-05-28 19:42:04 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:07.234900 | orchestrator | 2025-05-28 19:42:07 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:07.239484 | orchestrator | 2025-05-28 19:42:07 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:07.239539 | orchestrator | 2025-05-28 19:42:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:10.278526 | orchestrator | 2025-05-28 19:42:10 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:10.278667 | orchestrator | 2025-05-28 19:42:10 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:10.278682 | orchestrator | 2025-05-28 19:42:10 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:13.316940 | orchestrator | 2025-05-28 19:42:13 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:13.317043 | orchestrator | 2025-05-28 19:42:13 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:13.317059 | orchestrator | 2025-05-28 19:42:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:16.364788 | orchestrator | 2025-05-28 19:42:16 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:16.365725 | orchestrator | 2025-05-28 19:42:16 | INFO  
| Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:16.365763 | orchestrator | 2025-05-28 19:42:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:19.409640 | orchestrator | 2025-05-28 19:42:19 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:19.411101 | orchestrator | 2025-05-28 19:42:19 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:19.411196 | orchestrator | 2025-05-28 19:42:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:22.459708 | orchestrator | 2025-05-28 19:42:22 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:22.461403 | orchestrator | 2025-05-28 19:42:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:22.461466 | orchestrator | 2025-05-28 19:42:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:25.524425 | orchestrator | 2025-05-28 19:42:25 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:25.526568 | orchestrator | 2025-05-28 19:42:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:25.526603 | orchestrator | 2025-05-28 19:42:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:28.575495 | orchestrator | 2025-05-28 19:42:28 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:28.577448 | orchestrator | 2025-05-28 19:42:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:28.577480 | orchestrator | 2025-05-28 19:42:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:31.616810 | orchestrator | 2025-05-28 19:42:31 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:31.617185 | orchestrator | 2025-05-28 19:42:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 
19:42:31.617214 | orchestrator | 2025-05-28 19:42:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:34.662387 | orchestrator | 2025-05-28 19:42:34 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:34.664237 | orchestrator | 2025-05-28 19:42:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:34.664505 | orchestrator | 2025-05-28 19:42:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:37.707267 | orchestrator | 2025-05-28 19:42:37 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:37.708272 | orchestrator | 2025-05-28 19:42:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:37.708589 | orchestrator | 2025-05-28 19:42:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:40.762836 | orchestrator | 2025-05-28 19:42:40 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state STARTED 2025-05-28 19:42:40.762975 | orchestrator | 2025-05-28 19:42:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:42:40.763004 | orchestrator | 2025-05-28 19:42:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:42:43.812586 | orchestrator | 2025-05-28 19:42:43.812673 | orchestrator | None 2025-05-28 19:42:43.812690 | orchestrator | 2025-05-28 19:42:43.812702 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-28 19:42:43.812714 | orchestrator | 2025-05-28 19:42:43.812725 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-28 19:42:43.812736 | orchestrator | Wednesday 28 May 2025 19:34:22 +0000 (0:00:00.378) 0:00:00.378 ********* 2025-05-28 19:42:43.812747 | orchestrator | changed: [testbed-manager] 2025-05-28 19:42:43.812760 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.812771 | orchestrator | changed: 
[testbed-node-1] 2025-05-28 19:42:43.812782 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:42:43.812793 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:42:43.812804 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:42:43.812815 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:42:43.812826 | orchestrator | 2025-05-28 19:42:43.812838 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-28 19:42:43.812849 | orchestrator | Wednesday 28 May 2025 19:34:24 +0000 (0:00:02.226) 0:00:02.604 ********* 2025-05-28 19:42:43.812860 | orchestrator | changed: [testbed-manager] 2025-05-28 19:42:43.812871 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.812882 | orchestrator | changed: [testbed-node-1] 2025-05-28 19:42:43.812893 | orchestrator | changed: [testbed-node-2] 2025-05-28 19:42:43.812904 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:42:43.812915 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:42:43.812926 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:42:43.812937 | orchestrator | 2025-05-28 19:42:43.812948 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-28 19:42:43.812959 | orchestrator | Wednesday 28 May 2025 19:34:25 +0000 (0:00:01.114) 0:00:03.721 ********* 2025-05-28 19:42:43.812971 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-28 19:42:43.812982 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-28 19:42:43.813174 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-28 19:42:43.813190 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-28 19:42:43.813203 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-28 19:42:43.813215 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-28 19:42:43.813227 | orchestrator | changed: 
[testbed-node-5] => (item=enable_nova_True) 2025-05-28 19:42:43.813239 | orchestrator | 2025-05-28 19:42:43.813252 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-28 19:42:43.813264 | orchestrator | 2025-05-28 19:42:43.813276 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-28 19:42:43.813289 | orchestrator | Wednesday 28 May 2025 19:34:26 +0000 (0:00:01.045) 0:00:04.766 ********* 2025-05-28 19:42:43.813301 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:42:43.813336 | orchestrator | 2025-05-28 19:42:43.813348 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-28 19:42:43.813360 | orchestrator | Wednesday 28 May 2025 19:34:27 +0000 (0:00:00.476) 0:00:05.243 ********* 2025-05-28 19:42:43.813386 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-28 19:42:43.813399 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-28 19:42:43.813411 | orchestrator | 2025-05-28 19:42:43.813423 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-28 19:42:43.813436 | orchestrator | Wednesday 28 May 2025 19:34:31 +0000 (0:00:04.150) 0:00:09.393 ********* 2025-05-28 19:42:43.813469 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-28 19:42:43.813483 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-28 19:42:43.813495 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.813508 | orchestrator | 2025-05-28 19:42:43.813521 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-28 19:42:43.813532 | orchestrator | Wednesday 28 May 2025 19:34:36 +0000 (0:00:04.440) 0:00:13.833 ********* 2025-05-28 19:42:43.813543 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.813554 | orchestrator 
| 2025-05-28 19:42:43.813565 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-28 19:42:43.813576 | orchestrator | Wednesday 28 May 2025 19:34:36 +0000 (0:00:00.845) 0:00:14.679 ********* 2025-05-28 19:42:43.813587 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.813597 | orchestrator | 2025-05-28 19:42:43.813608 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-28 19:42:43.813619 | orchestrator | Wednesday 28 May 2025 19:34:38 +0000 (0:00:01.593) 0:00:16.272 ********* 2025-05-28 19:42:43.813630 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.813641 | orchestrator | 2025-05-28 19:42:43.813652 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-28 19:42:43.813663 | orchestrator | Wednesday 28 May 2025 19:34:42 +0000 (0:00:03.995) 0:00:20.268 ********* 2025-05-28 19:42:43.813674 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.813685 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.813696 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.813707 | orchestrator | 2025-05-28 19:42:43.813718 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-28 19:42:43.813729 | orchestrator | Wednesday 28 May 2025 19:34:43 +0000 (0:00:00.627) 0:00:20.896 ********* 2025-05-28 19:42:43.813740 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:42:43.813751 | orchestrator | 2025-05-28 19:42:43.813762 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-28 19:42:43.813773 | orchestrator | Wednesday 28 May 2025 19:35:14 +0000 (0:00:30.932) 0:00:51.828 ********* 2025-05-28 19:42:43.813784 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.813795 | orchestrator | 2025-05-28 19:42:43.813806 | orchestrator | TASK [nova-cell : Get a list of 
existing cells] ******************************** 2025-05-28 19:42:43.813816 | orchestrator | Wednesday 28 May 2025 19:35:27 +0000 (0:00:13.766) 0:01:05.595 ********* 2025-05-28 19:42:43.813827 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:42:43.813838 | orchestrator | 2025-05-28 19:42:43.813849 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-28 19:42:43.813861 | orchestrator | Wednesday 28 May 2025 19:35:38 +0000 (0:00:10.531) 0:01:16.126 ********* 2025-05-28 19:42:43.813889 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:42:43.813901 | orchestrator | 2025-05-28 19:42:43.813912 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-28 19:42:43.813923 | orchestrator | Wednesday 28 May 2025 19:35:40 +0000 (0:00:01.709) 0:01:17.836 ********* 2025-05-28 19:42:43.813934 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.813945 | orchestrator | 2025-05-28 19:42:43.813956 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-28 19:42:43.813967 | orchestrator | Wednesday 28 May 2025 19:35:41 +0000 (0:00:01.776) 0:01:19.612 ********* 2025-05-28 19:42:43.813979 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:42:43.813990 | orchestrator | 2025-05-28 19:42:43.814001 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-28 19:42:43.814012 | orchestrator | Wednesday 28 May 2025 19:35:43 +0000 (0:00:01.832) 0:01:21.444 ********* 2025-05-28 19:42:43.814186 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:42:43.814198 | orchestrator | 2025-05-28 19:42:43.814209 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-28 19:42:43.814229 | orchestrator | Wednesday 28 May 2025 19:36:00 +0000 (0:00:17.103) 
0:01:38.548 ********* 2025-05-28 19:42:43.814240 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.814251 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.814263 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.814274 | orchestrator | 2025-05-28 19:42:43.814285 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-28 19:42:43.814296 | orchestrator | 2025-05-28 19:42:43.814307 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-28 19:42:43.814337 | orchestrator | Wednesday 28 May 2025 19:36:01 +0000 (0:00:00.291) 0:01:38.839 ********* 2025-05-28 19:42:43.814348 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:42:43.814359 | orchestrator | 2025-05-28 19:42:43.814370 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-28 19:42:43.814381 | orchestrator | Wednesday 28 May 2025 19:36:01 +0000 (0:00:00.796) 0:01:39.635 ********* 2025-05-28 19:42:43.814392 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.814404 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.814415 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.814426 | orchestrator | 2025-05-28 19:42:43.814437 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-05-28 19:42:43.814448 | orchestrator | Wednesday 28 May 2025 19:36:04 +0000 (0:00:02.455) 0:01:42.091 ********* 2025-05-28 19:42:43.814459 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.814470 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.814481 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.814492 | orchestrator | 2025-05-28 19:42:43.814503 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-28 19:42:43.814514 | 
orchestrator | Wednesday 28 May 2025 19:36:06 +0000 (0:00:02.215) 0:01:44.307 ********* 2025-05-28 19:42:43.814525 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.814542 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.814554 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.814565 | orchestrator | 2025-05-28 19:42:43.814576 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-28 19:42:43.814587 | orchestrator | Wednesday 28 May 2025 19:36:06 +0000 (0:00:00.468) 0:01:44.775 ********* 2025-05-28 19:42:43.814598 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-28 19:42:43.814609 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.814620 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-28 19:42:43.814631 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.814642 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-28 19:42:43.814653 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-28 19:42:43.814665 | orchestrator | 2025-05-28 19:42:43.814676 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-28 19:42:43.814687 | orchestrator | Wednesday 28 May 2025 19:36:14 +0000 (0:00:07.890) 0:01:52.665 ********* 2025-05-28 19:42:43.814698 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.814709 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.814720 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.814731 | orchestrator | 2025-05-28 19:42:43.814742 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-28 19:42:43.814753 | orchestrator | Wednesday 28 May 2025 19:36:15 +0000 (0:00:00.558) 0:01:53.224 ********* 2025-05-28 19:42:43.814764 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-28 19:42:43.814775 | 
orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.814786 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-28 19:42:43.814797 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.814808 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-28 19:42:43.814819 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.814830 | orchestrator | 2025-05-28 19:42:43.814868 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-28 19:42:43.814880 | orchestrator | Wednesday 28 May 2025 19:36:16 +0000 (0:00:00.742) 0:01:53.966 ********* 2025-05-28 19:42:43.814891 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.814902 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.814913 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.814924 | orchestrator | 2025-05-28 19:42:43.814935 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-28 19:42:43.814946 | orchestrator | Wednesday 28 May 2025 19:36:16 +0000 (0:00:00.459) 0:01:54.426 ********* 2025-05-28 19:42:43.814957 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.814968 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.814979 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.814990 | orchestrator | 2025-05-28 19:42:43.815001 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-28 19:42:43.815012 | orchestrator | Wednesday 28 May 2025 19:36:17 +0000 (0:00:01.027) 0:01:55.454 ********* 2025-05-28 19:42:43.815023 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.815054 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.815066 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.815077 | orchestrator | 2025-05-28 19:42:43.815088 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] 
*********************** 2025-05-28 19:42:43.815100 | orchestrator | Wednesday 28 May 2025 19:36:20 +0000 (0:00:02.663) 0:01:58.117 ********* 2025-05-28 19:42:43.815110 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.815121 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.815133 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:42:43.815144 | orchestrator | 2025-05-28 19:42:43.815154 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-28 19:42:43.815165 | orchestrator | Wednesday 28 May 2025 19:36:40 +0000 (0:00:20.477) 0:02:18.595 ********* 2025-05-28 19:42:43.815176 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.815187 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.815198 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:42:43.815209 | orchestrator | 2025-05-28 19:42:43.815220 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-28 19:42:43.815231 | orchestrator | Wednesday 28 May 2025 19:36:50 +0000 (0:00:09.939) 0:02:28.534 ********* 2025-05-28 19:42:43.815242 | orchestrator | ok: [testbed-node-0] 2025-05-28 19:42:43.815253 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.815264 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.815275 | orchestrator | 2025-05-28 19:42:43.815286 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-28 19:42:43.815297 | orchestrator | Wednesday 28 May 2025 19:36:52 +0000 (0:00:01.623) 0:02:30.158 ********* 2025-05-28 19:42:43.815307 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.815334 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.815346 | orchestrator | changed: [testbed-node-0] 2025-05-28 19:42:43.815357 | orchestrator | 2025-05-28 19:42:43.815368 | orchestrator | TASK [nova-cell : Update cell] 
************************************************* 2025-05-28 19:42:43.815379 | orchestrator | Wednesday 28 May 2025 19:37:02 +0000 (0:00:10.532) 0:02:40.691 ********* 2025-05-28 19:42:43.815390 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.815401 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.815412 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.815423 | orchestrator | 2025-05-28 19:42:43.815434 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-28 19:42:43.815445 | orchestrator | Wednesday 28 May 2025 19:37:04 +0000 (0:00:01.512) 0:02:42.203 ********* 2025-05-28 19:42:43.815587 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.815599 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.815610 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.815621 | orchestrator | 2025-05-28 19:42:43.815633 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-28 19:42:43.815652 | orchestrator | 2025-05-28 19:42:43.815664 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-28 19:42:43.815675 | orchestrator | Wednesday 28 May 2025 19:37:04 +0000 (0:00:00.502) 0:02:42.706 ********* 2025-05-28 19:42:43.815686 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:42:43.815698 | orchestrator | 2025-05-28 19:42:43.815714 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-28 19:42:43.815726 | orchestrator | Wednesday 28 May 2025 19:37:05 +0000 (0:00:00.718) 0:02:43.424 ********* 2025-05-28 19:42:43.815737 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-28 19:42:43.815747 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-28 19:42:43.815758 | 
orchestrator | 2025-05-28 19:42:43.815769 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-28 19:42:43.815780 | orchestrator | Wednesday 28 May 2025 19:37:08 +0000 (0:00:03.347) 0:02:46.772 ********* 2025-05-28 19:42:43.815791 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-28 19:42:43.815803 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-28 19:42:43.815814 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-28 19:42:43.815825 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-28 19:42:43.815836 | orchestrator | 2025-05-28 19:42:43.815847 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-28 19:42:43.815876 | orchestrator | Wednesday 28 May 2025 19:37:15 +0000 (0:00:06.534) 0:02:53.306 ********* 2025-05-28 19:42:43.815887 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-28 19:42:43.815898 | orchestrator | 2025-05-28 19:42:43.815909 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-28 19:42:43.815920 | orchestrator | Wednesday 28 May 2025 19:37:18 +0000 (0:00:03.160) 0:02:56.466 ********* 2025-05-28 19:42:43.815931 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-28 19:42:43.815942 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-28 19:42:43.815952 | orchestrator | 2025-05-28 19:42:43.815964 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-28 19:42:43.815974 | orchestrator | Wednesday 28 May 2025 19:37:22 +0000 (0:00:04.095) 0:03:00.562 
********* 2025-05-28 19:42:43.815985 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-28 19:42:43.815996 | orchestrator | 2025-05-28 19:42:43.816007 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-28 19:42:43.816018 | orchestrator | Wednesday 28 May 2025 19:37:26 +0000 (0:00:03.420) 0:03:03.983 ********* 2025-05-28 19:42:43.816029 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-28 19:42:43.816040 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-28 19:42:43.816050 | orchestrator | 2025-05-28 19:42:43.816061 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-28 19:42:43.816080 | orchestrator | Wednesday 28 May 2025 19:37:34 +0000 (0:00:08.122) 0:03:12.105 ********* 2025-05-28 19:42:43.816097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.816126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.816141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.816160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.816174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-05-28 19:42:43.816211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.816228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.816241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.816272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.816285 | orchestrator | 2025-05-28 19:42:43.816297 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-28 19:42:43.816308 | orchestrator | Wednesday 28 May 2025 19:37:35 +0000 (0:00:01.404) 0:03:13.510 ********* 2025-05-28 19:42:43.816377 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.816388 | orchestrator | 2025-05-28 19:42:43.816399 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-28 19:42:43.816411 | orchestrator | Wednesday 28 May 2025 19:37:35 +0000 (0:00:00.245) 0:03:13.755 ********* 2025-05-28 19:42:43.816422 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.816433 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.816444 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.816455 | orchestrator | 2025-05-28 19:42:43.816567 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-28 19:42:43.816589 | orchestrator | Wednesday 28 May 2025 19:37:36 +0000 (0:00:00.292) 0:03:14.048 ********* 2025-05-28 19:42:43.816607 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-28 19:42:43.816623 | orchestrator | 2025-05-28 19:42:43.816666 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-28 19:42:43.816688 | orchestrator | 
Wednesday 28 May 2025 19:37:36 +0000 (0:00:00.508) 0:03:14.556 ********* 2025-05-28 19:42:43.816709 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.816728 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.816747 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.816759 | orchestrator | 2025-05-28 19:42:43.816770 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-28 19:42:43.816781 | orchestrator | Wednesday 28 May 2025 19:37:37 +0000 (0:00:00.296) 0:03:14.853 ********* 2025-05-28 19:42:43.816792 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:42:43.816803 | orchestrator | 2025-05-28 19:42:43.816814 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-28 19:42:43.816825 | orchestrator | Wednesday 28 May 2025 19:37:37 +0000 (0:00:00.738) 0:03:15.591 ********* 2025-05-28 19:42:43.816838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.816858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.816880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.816901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.816913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.816934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.816946 | orchestrator | 2025-05-28 19:42:43.816957 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-28 19:42:43.816969 | orchestrator | Wednesday 28 May 2025 19:37:40 +0000 (0:00:02.546) 0:03:18.138 ********* 2025-05-28 19:42:43.816980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:42:43.816998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817016 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.817029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:42:43.817042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817053 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.817070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:42:43.817082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817100 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.817111 | orchestrator | 2025-05-28 19:42:43.817122 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-28 19:42:43.817133 | orchestrator | Wednesday 28 May 2025 19:37:40 +0000 (0:00:00.602) 0:03:18.740 ********* 2025-05-28 19:42:43 | INFO  | Task addbcc04-da37-4ab4-b908-1dfc97525e6d is in state SUCCESS 2025-05-28 19:42:43.817152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:42:43.817178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817189 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.817206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:42:43.817224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817236 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.817256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:42:43.817269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817280 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.817292 | orchestrator | 2025-05-28 19:42:43.817303 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-28 19:42:43.817344 | orchestrator | Wednesday 28 May 2025 19:37:41 +0000 (0:00:01.047) 0:03:19.787 ********* 2025-05-28 19:42:43.817372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.817416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.817440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.817464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.817492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.817543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.817593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817616 | orchestrator | 2025-05-28 19:42:43.817635 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-28 19:42:43.817653 | orchestrator | Wednesday 28 May 2025 19:37:44 +0000 (0:00:02.560) 0:03:22.348 ********* 2025-05-28 19:42:43.817679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.817694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.817721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-28 19:42:43.817734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.817746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.817780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.817810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817822 | orchestrator | 2025-05-28 19:42:43.817834 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-28 19:42:43.817845 | orchestrator | Wednesday 28 May 2025 19:37:50 +0000 (0:00:05.816) 0:03:28.164 ********* 2025-05-28 19:42:43.817857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:42:43.817874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.817907 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.817925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-28 19:42:43.817939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-28 19:42:43.817952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.817975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.817987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.817999 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.818010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.818056 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.818068 | orchestrator |
2025-05-28 19:42:43.818079 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-05-28 19:42:43.818090 | orchestrator | Wednesday 28 May 2025 19:37:51 +0000 (0:00:00.770) 0:03:28.935 *********
2025-05-28 19:42:43.818102 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:42:43.818113 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:42:43.818124 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:42:43.818135 | orchestrator |
2025-05-28 19:42:43.818153 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-05-28 19:42:43.818165 | orchestrator | Wednesday 28 May 2025 19:37:52 +0000 (0:00:01.598) 0:03:30.534 *********
2025-05-28 19:42:43.818176 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.818187 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.818198 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.818209 | orchestrator |
2025-05-28 19:42:43.818221 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-05-28 19:42:43.818232 | orchestrator | Wednesday 28 May 2025 19:37:53 +0000 (0:00:00.456) 0:03:30.991 *********
2025-05-28 19:42:43.818244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-28 19:42:43.818268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-28 19:42:43.818288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-28 19:42:43.818302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.818396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.818416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.818432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.818445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.818456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.818468 | orchestrator |
2025-05-28 19:42:43.818479 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-28 19:42:43.818490 | orchestrator | Wednesday 28 May 2025 19:37:55 +0000 (0:00:01.915) 0:03:32.906 *********
2025-05-28 19:42:43.818502 | orchestrator |
2025-05-28 19:42:43.818513 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-28 19:42:43.818524 | orchestrator | Wednesday 28 May 2025 19:37:55 +0000 (0:00:00.275) 0:03:33.182 *********
2025-05-28 19:42:43.818535 | orchestrator |
2025-05-28 19:42:43.818546 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-28 19:42:43.818564 | orchestrator | Wednesday 28 May 2025 19:37:55 +0000 (0:00:00.126) 0:03:33.308 *********
2025-05-28 19:42:43.818576 | orchestrator |
2025-05-28 19:42:43.818587 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-05-28 19:42:43.818598 | orchestrator | Wednesday 28 May 2025 19:37:55 +0000 (0:00:00.226) 0:03:33.535 *********
2025-05-28 19:42:43.818609 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:42:43.818620 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:42:43.818631 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:42:43.818642 | orchestrator |
2025-05-28 19:42:43.818654 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-05-28 19:42:43.818665 | orchestrator | Wednesday 28 May 2025 19:38:17 +0000 (0:00:21.642) 0:03:55.177 *********
2025-05-28 19:42:43.818682 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:42:43.818693 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:42:43.818704 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:42:43.818715 | orchestrator |
2025-05-28 19:42:43.818726 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-05-28 19:42:43.818737 | orchestrator |
2025-05-28 19:42:43.818748 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-28 19:42:43.818759 | orchestrator | Wednesday 28 May 2025 19:38:26 +0000 (0:00:08.975) 0:04:04.153 *********
2025-05-28 19:42:43.818770 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-28 19:42:43.818781 | orchestrator |
2025-05-28 19:42:43.818793 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-28 19:42:43.818804 | orchestrator | Wednesday 28 May 2025 19:38:27 +0000 (0:00:01.264) 0:04:05.417 *********
2025-05-28 19:42:43.818814 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:42:43.818825 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:42:43.818836 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:42:43.818846 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.818855 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.818865 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.818875 | orchestrator |
2025-05-28 19:42:43.818884 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-05-28 19:42:43.818894 | orchestrator | Wednesday 28 May 2025 19:38:28 +0000 (0:00:00.665) 0:04:06.083 *********
2025-05-28 19:42:43.818904 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.818914 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.818923 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.818933 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-28 19:42:43.818943 | orchestrator |
2025-05-28 19:42:43.818952 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-28 19:42:43.818962 | orchestrator | Wednesday 28 May 2025 19:38:29 +0000 (0:00:01.009) 0:04:07.092 *********
2025-05-28 19:42:43.818972 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-05-28 19:42:43.818982 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-05-28 19:42:43.818996 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-05-28 19:42:43.819006 | orchestrator |
2025-05-28 19:42:43.819016 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-28 19:42:43.819026 | orchestrator | Wednesday 28 May 2025 19:38:30 +0000 (0:00:00.823) 0:04:07.915 *********
2025-05-28 19:42:43.819035 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-05-28 19:42:43.819045 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-05-28 19:42:43.819055 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-05-28 19:42:43.819065 | orchestrator |
2025-05-28 19:42:43.819075 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-28 19:42:43.819084 | orchestrator | Wednesday 28 May 2025 19:38:31 +0000 (0:00:01.319) 0:04:09.235 *********
2025-05-28 19:42:43.819094 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-05-28 19:42:43.819104 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:42:43.819113 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-05-28 19:42:43.819123 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:42:43.819133 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-05-28 19:42:43.819142 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:42:43.819152 | orchestrator |
2025-05-28 19:42:43.819162 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-05-28 19:42:43.819172 | orchestrator | Wednesday 28 May 2025 19:38:32 +0000 (0:00:00.601) 0:04:09.836 *********
2025-05-28 19:42:43.819182 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-28 19:42:43.819197 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-28 19:42:43.819207 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.819217 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-28 19:42:43.819227 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-28 19:42:43.819236 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-28 19:42:43.819246 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-28 19:42:43.819256 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-28 19:42:43.819265 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.819275 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-28 19:42:43.819285 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-28 19:42:43.819295 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.820037 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-28 19:42:43.820058 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-28 19:42:43.820068 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-28 19:42:43.820078 | orchestrator |
2025-05-28 19:42:43.820088 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-05-28 19:42:43.820097 | orchestrator | Wednesday 28 May 2025 19:38:33 +0000 (0:00:01.220) 0:04:11.056 *********
2025-05-28 19:42:43.820107 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.820117 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.820127 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.820137 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:42:43.820146 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:42:43.820156 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:42:43.820166 | orchestrator |
2025-05-28 19:42:43.820176 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-05-28 19:42:43.820185 | orchestrator | Wednesday 28 May 2025 19:38:34 +0000 (0:00:01.190) 0:04:12.246 *********
2025-05-28 19:42:43.820195 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.820205 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.820215 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.820225 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:42:43.820256 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:42:43.820267 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:42:43.820276 | orchestrator |
2025-05-28 19:42:43.820286 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-05-28 19:42:43.820296 | orchestrator | Wednesday 28 May 2025 19:38:36 +0000 (0:00:01.836) 0:04:14.083 *********
2025-05-28 19:42:43.820399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-28 19:42:43.820418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-28 19:42:43.820438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-28 19:42:43.820458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-28 19:42:43.820470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-28 19:42:43.820480 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-28 19:42:43.820495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-28 19:42:43.820511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-28 19:42:43.820521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-28 19:42:43.820533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:42:43.820549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-28 19:42:43.820560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.820570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-28 19:42:43.820594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:42:43.820605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.820615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-28 19:42:43.820631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-28 19:42:43.820644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-28 19:42:43.820655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:42:43.820667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-28 19:42:43.820687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-28 19:42:43.820699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-28 19:42:43.820711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-28 19:42:43.820728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-28 19:42:43.820740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.820752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-28 19:42:43.820773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute',
'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.820785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.820797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.820815 | orchestrator 
| changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.820827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.820839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.820859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.820872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.820883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.820897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.820907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.820917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.820930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.820943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.820952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.820960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.820973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.820982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.820994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821031 | orchestrator | 2025-05-28 19:42:43.821039 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-28 19:42:43.821047 | orchestrator | Wednesday 28 May 2025 19:38:38 +0000 (0:00:02.549) 0:04:16.633 ********* 2025-05-28 19:42:43.821055 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-28 19:42:43.821064 | orchestrator | 2025-05-28 19:42:43.821076 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-28 19:42:43.821085 | orchestrator | Wednesday 28 May 2025 19:38:40 +0000 (0:00:01.387) 0:04:18.020 ********* 2025-05-28 19:42:43.821093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821170 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821236 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 
19:42:43.821248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.821256 | orchestrator | 2025-05-28 19:42:43.821264 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-28 19:42:43.821272 | orchestrator | Wednesday 28 May 2025 19:38:44 +0000 (0:00:04.118) 0:04:22.138 ********* 2025-05-28 19:42:43.821281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.821289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.821302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821331 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.821341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.821353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.821362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
2025-05-28 19:42:43.821370 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:42:43.821379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.821397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.821406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821414 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:42:43.821423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.821435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821443 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.821451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.821460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821472 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.821485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.821494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821502 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.821510 | orchestrator | 2025-05-28 19:42:43.821518 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-28 19:42:43.821526 | orchestrator | Wednesday 28 May 2025 19:38:46 +0000 (0:00:01.762) 0:04:23.901 ********* 2025-05-28 19:42:43.821535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.821546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.821555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821567 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.821581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.821590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.821598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821606 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:42:43.821618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.821626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.821646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821666 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:42:43.821686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.821703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.821733 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.821751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821766 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.821779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.821804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.821818 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.821834 | orchestrator | 2025-05-28 19:42:43.821849 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-28 19:42:43.821864 | orchestrator | Wednesday 28 May 2025 19:38:48 +0000 (0:00:02.331) 0:04:26.232 ********* 2025-05-28 19:42:43.821873 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.821881 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.821889 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.822195 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-28 19:42:43.822212 | orchestrator | 2025-05-28 19:42:43.822220 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-28 19:42:43.822228 | orchestrator | Wednesday 28 May 2025 19:38:49 +0000 (0:00:01.083) 0:04:27.315 ********* 2025-05-28 19:42:43.822236 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-28 19:42:43.822244 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-28 19:42:43.822252 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-28 19:42:43.822260 | orchestrator | 2025-05-28 19:42:43.822268 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-28 19:42:43.822276 | orchestrator | Wednesday 28 May 2025 19:38:50 +0000 (0:00:00.767) 0:04:28.083 ********* 2025-05-28 19:42:43.822284 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-28 19:42:43.822292 | orchestrator | ok: 
[testbed-node-4 -> localhost] 2025-05-28 19:42:43.822300 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-28 19:42:43.822307 | orchestrator | 2025-05-28 19:42:43.822367 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-28 19:42:43.822376 | orchestrator | Wednesday 28 May 2025 19:38:51 +0000 (0:00:00.763) 0:04:28.847 ********* 2025-05-28 19:42:43.822384 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:42:43.822392 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:42:43.822400 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:42:43.822408 | orchestrator | 2025-05-28 19:42:43.822416 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-28 19:42:43.822424 | orchestrator | Wednesday 28 May 2025 19:38:51 +0000 (0:00:00.643) 0:04:29.490 ********* 2025-05-28 19:42:43.822432 | orchestrator | ok: [testbed-node-3] 2025-05-28 19:42:43.822440 | orchestrator | ok: [testbed-node-4] 2025-05-28 19:42:43.822447 | orchestrator | ok: [testbed-node-5] 2025-05-28 19:42:43.822455 | orchestrator | 2025-05-28 19:42:43.822463 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-28 19:42:43.822471 | orchestrator | Wednesday 28 May 2025 19:38:52 +0000 (0:00:00.471) 0:04:29.962 ********* 2025-05-28 19:42:43.822479 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-28 19:42:43.822487 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-28 19:42:43.822495 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-28 19:42:43.822503 | orchestrator | 2025-05-28 19:42:43.822511 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-28 19:42:43.822519 | orchestrator | Wednesday 28 May 2025 19:38:53 +0000 (0:00:01.317) 0:04:31.279 ********* 2025-05-28 19:42:43.822527 | orchestrator | changed: [testbed-node-3] => 
(item=nova-compute) 2025-05-28 19:42:43.822543 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-28 19:42:43.822551 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-28 19:42:43.822559 | orchestrator | 2025-05-28 19:42:43.822566 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-28 19:42:43.822574 | orchestrator | Wednesday 28 May 2025 19:38:54 +0000 (0:00:01.323) 0:04:32.603 ********* 2025-05-28 19:42:43.822582 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-28 19:42:43.822591 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-28 19:42:43.822605 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-28 19:42:43.822619 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-28 19:42:43.822638 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-28 19:42:43.822654 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-28 19:42:43.822666 | orchestrator | 2025-05-28 19:42:43.822678 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-28 19:42:43.822714 | orchestrator | Wednesday 28 May 2025 19:38:59 +0000 (0:00:05.122) 0:04:37.725 ********* 2025-05-28 19:42:43.822730 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.822739 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:42:43.822747 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:42:43.822755 | orchestrator | 2025-05-28 19:42:43.822763 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-28 19:42:43.822771 | orchestrator | Wednesday 28 May 2025 19:39:00 +0000 (0:00:00.488) 0:04:38.214 ********* 2025-05-28 19:42:43.822778 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.822786 | orchestrator | skipping: [testbed-node-4] 2025-05-28 
19:42:43.822794 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:42:43.822802 | orchestrator | 2025-05-28 19:42:43.822810 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-28 19:42:43.822818 | orchestrator | Wednesday 28 May 2025 19:39:00 +0000 (0:00:00.470) 0:04:38.684 ********* 2025-05-28 19:42:43.822826 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:42:43.822834 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:42:43.822841 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:42:43.822849 | orchestrator | 2025-05-28 19:42:43.822857 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-28 19:42:43.822865 | orchestrator | Wednesday 28 May 2025 19:39:02 +0000 (0:00:01.297) 0:04:39.981 ********* 2025-05-28 19:42:43.822873 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-28 19:42:43.822882 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-28 19:42:43.822890 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-28 19:42:43.822898 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-28 19:42:43.822940 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-28 19:42:43.822950 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-28 19:42:43.822958 | orchestrator | 2025-05-28 19:42:43.822966 | orchestrator | TASK 
[nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-28 19:42:43.822974 | orchestrator | Wednesday 28 May 2025 19:39:05 +0000 (0:00:03.358) 0:04:43.340 ********* 2025-05-28 19:42:43.822982 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-28 19:42:43.822990 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 19:42:43.822998 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 19:42:43.823016 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-28 19:42:43.823024 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:42:43.823032 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-28 19:42:43.823040 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:42:43.823048 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-28 19:42:43.823056 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:42:43.823064 | orchestrator | 2025-05-28 19:42:43.823072 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-28 19:42:43.823080 | orchestrator | Wednesday 28 May 2025 19:39:08 +0000 (0:00:03.328) 0:04:46.668 ********* 2025-05-28 19:42:43.823088 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.823096 | orchestrator | 2025-05-28 19:42:43.823104 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-28 19:42:43.823112 | orchestrator | Wednesday 28 May 2025 19:39:08 +0000 (0:00:00.126) 0:04:46.795 ********* 2025-05-28 19:42:43.823120 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.823127 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:42:43.823135 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:42:43.823143 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.823151 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.823159 | orchestrator | skipping: [testbed-node-2] 2025-05-28 
19:42:43.823167 | orchestrator | 2025-05-28 19:42:43.823175 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-28 19:42:43.823183 | orchestrator | Wednesday 28 May 2025 19:39:09 +0000 (0:00:00.877) 0:04:47.672 ********* 2025-05-28 19:42:43.823191 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-28 19:42:43.823199 | orchestrator | 2025-05-28 19:42:43.823207 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-28 19:42:43.823215 | orchestrator | Wednesday 28 May 2025 19:39:10 +0000 (0:00:00.371) 0:04:48.043 ********* 2025-05-28 19:42:43.823223 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.823231 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:42:43.823238 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:42:43.823246 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.823254 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.823262 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.823270 | orchestrator | 2025-05-28 19:42:43.823278 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-28 19:42:43.823286 | orchestrator | Wednesday 28 May 2025 19:39:10 +0000 (0:00:00.725) 0:04:48.769 ********* 2025-05-28 19:42:43.823299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.823351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.823393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.823404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.823413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.823425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}})  2025-05-28 19:42:43.823433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.823508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.823517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.823559 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.823569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.823587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.823599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.823621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.823650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.823660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823669 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 
2025-05-28 19:42:43.823677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.823689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.823698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.823711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.823758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 
'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.823766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.823778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823913 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823932 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.823978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.823987 | orchestrator | 2025-05-28 19:42:43.823994 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-28 19:42:43.824000 | orchestrator | Wednesday 28 May 2025 19:39:15 +0000 (0:00:04.113) 0:04:52.883 ********* 2025-05-28 19:42:43.824008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.824018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.824029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.824036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.824043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.824067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.824085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.824098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.824105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.824112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.824136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.824144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.824152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 
'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.824166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.824173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.824180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.824220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.824227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.824241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.824249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.824273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.824281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.824289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.824322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.824357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824365 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.824372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.824383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.824393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.824401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.824408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.824432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.824441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.824452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.824459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.824471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.824485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-28 19:42:43.824540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.824554 | orchestrator | 2025-05-28 19:42:43.824561 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] 
*******************
2025-05-28 19:42:43.824568 | orchestrator | Wednesday 28 May 2025 19:39:21 +0000 (0:00:06.813) 0:04:59.696 *********
2025-05-28 19:42:43.824575 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:42:43.824581 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:42:43.824588 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:42:43.824595 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.824601 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.824608 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.824615 | orchestrator |
2025-05-28 19:42:43.824621 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-05-28 19:42:43.824628 | orchestrator | Wednesday 28 May 2025 19:39:23 +0000 (0:00:01.575) 0:05:01.272 *********
2025-05-28 19:42:43.824635 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-28 19:42:43.824659 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-28 19:42:43.824667 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.824674 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-28 19:42:43.824680 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-28 19:42:43.824691 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-28 19:42:43.824698 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.824705 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-28 19:42:43.824711 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-28 19:42:43.824718 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.824725 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-28 19:42:43.824732 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-28 19:42:43.824738 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-28 19:42:43.824745 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-28 19:42:43.824752 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-28 19:42:43.824759 | orchestrator |
2025-05-28 19:42:43.824766 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-05-28 19:42:43.824772 | orchestrator | Wednesday 28 May 2025 19:39:28 +0000 (0:00:05.335) 0:05:06.608 *********
2025-05-28 19:42:43.824779 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:42:43.824786 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:42:43.824793 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:42:43.824800 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.824806 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.824813 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.824820 | orchestrator |
2025-05-28 19:42:43.824827 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-05-28 19:42:43.824833 | orchestrator | Wednesday 28 May 2025 19:39:29 +0000 (0:00:00.869) 0:05:07.477 *********
2025-05-28 19:42:43.824840 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-28 19:42:43.824847 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-28 19:42:43.824854 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824861 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-28 19:42:43.824870 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824877 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824883 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-28 19:42:43.824890 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824897 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.824904 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824911 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.824917 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-28 19:42:43.824924 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824931 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.824938 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-28 19:42:43.824949 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824956 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824963 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824969 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824976 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824983 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-28 19:42:43.824990 | orchestrator |
2025-05-28 19:42:43.824996 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-05-28 19:42:43.825020 | orchestrator | Wednesday 28 May 2025 19:39:37 +0000 (0:00:07.927) 0:05:15.405 *********
2025-05-28 19:42:43.825028 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-28 19:42:43.825035 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-28 19:42:43.825041 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-28 19:42:43.825048 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-28 19:42:43.825055 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-28 19:42:43.825061 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-28 19:42:43.825068 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-28 19:42:43.825075 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-28 19:42:43.825081 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-28 19:42:43.825088 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-28 19:42:43.825094 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-28 19:42:43.825101 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-28 19:42:43.825108 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.825114 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-28 19:42:43.825121 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-28 19:42:43.825128 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.825135 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-28 19:42:43.825141 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.825148 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-28 19:42:43.825155 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-28 19:42:43.825161 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-28 19:42:43.825168 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-28 19:42:43.825174 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-28 19:42:43.825181 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-28 19:42:43.825188 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-28 19:42:43.825197 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-28
19:42:43.825208 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-28 19:42:43.825215 | orchestrator | 2025-05-28 19:42:43.825222 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-28 19:42:43.825228 | orchestrator | Wednesday 28 May 2025 19:39:47 +0000 (0:00:09.601) 0:05:25.006 ********* 2025-05-28 19:42:43.825235 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.825242 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:42:43.825248 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:42:43.825255 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.825262 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.825268 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.825275 | orchestrator | 2025-05-28 19:42:43.825282 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-28 19:42:43.825288 | orchestrator | Wednesday 28 May 2025 19:39:47 +0000 (0:00:00.678) 0:05:25.685 ********* 2025-05-28 19:42:43.825295 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.825302 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:42:43.825319 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:42:43.825326 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.825333 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.825340 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.825346 | orchestrator | 2025-05-28 19:42:43.825353 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-28 19:42:43.825360 | orchestrator | Wednesday 28 May 2025 19:39:48 +0000 (0:00:00.889) 0:05:26.574 ********* 2025-05-28 19:42:43.825367 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.825373 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.825380 | 
orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.825387 | orchestrator | changed: [testbed-node-3] 2025-05-28 19:42:43.825393 | orchestrator | changed: [testbed-node-4] 2025-05-28 19:42:43.825400 | orchestrator | changed: [testbed-node-5] 2025-05-28 19:42:43.825407 | orchestrator | 2025-05-28 19:42:43.825413 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-28 19:42:43.825420 | orchestrator | Wednesday 28 May 2025 19:39:51 +0000 (0:00:03.085) 0:05:29.660 ********* 2025-05-28 19:42:43.825446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.825454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.825462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.825491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825530 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.825537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.825552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.825559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.825598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825625 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:42:43.825635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.825642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.825649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825673 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.825692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825719 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:42:43.825726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.825736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.825747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.825757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.825764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2025-05-28 19:42:43.825779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.825790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.825808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.825850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
 2025-05-28 19:42:43.825861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.825868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.825878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825885 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.825892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825935 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.825945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.825952 | orchestrator | skipping: [testbed-node-2] 2025-05-28 19:42:43.825959 | orchestrator | 2025-05-28 19:42:43.825966 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-28 19:42:43.825973 | orchestrator | Wednesday 28 May 2025 19:39:53 +0000 (0:00:01.674) 0:05:31.335 ********* 2025-05-28 19:42:43.825979 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-28 19:42:43.825986 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-28 19:42:43.825993 | orchestrator | skipping: [testbed-node-3] 2025-05-28 19:42:43.826000 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-28 19:42:43.826007 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-28 19:42:43.826030 | orchestrator | skipping: [testbed-node-4] 2025-05-28 19:42:43.826039 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-28 19:42:43.826045 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-28 19:42:43.826052 | orchestrator | skipping: [testbed-node-5] 2025-05-28 19:42:43.826059 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-28 19:42:43.826066 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-28 19:42:43.826072 | orchestrator | skipping: [testbed-node-0] 2025-05-28 19:42:43.826079 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-28 19:42:43.826090 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-28 19:42:43.826097 | orchestrator | skipping: [testbed-node-1] 2025-05-28 19:42:43.826104 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-28 19:42:43.826110 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-28 19:42:43.826117 | orchestrator | 
skipping: [testbed-node-2] 2025-05-28 19:42:43.826124 | orchestrator | 2025-05-28 19:42:43.826131 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-28 19:42:43.826138 | orchestrator | Wednesday 28 May 2025 19:39:54 +0000 (0:00:00.959) 0:05:32.294 ********* 2025-05-28 19:42:43.826149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.826157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.826165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 19:42:43.826175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.826183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.826197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-28 19:42:43.826205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-28 19:42:43.826212 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 19:42:43.826223 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-28 19:42:43.826230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 19:42:43.826241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.826252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.826259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.826266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.826273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.826283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.826296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.826303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 19:42:43.826325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.826333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.826340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.826348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.826358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-28 19:42:43.826369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.826376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.826386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-28 19:42:43.826394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.826401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.826408 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-28 19:42:43.826418 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.826430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-28 19:42:43.826437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-28 19:42:43.826448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute',
'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-28 19:42:43.826590 | orchestrator |
2025-05-28 19:42:43.826597 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-28 19:42:43.826604 | orchestrator | Wednesday 28 May 2025 19:39:57 +0000 (0:00:03.251) 0:05:35.545 *********
2025-05-28 19:42:43.826611 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:42:43.826621 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:42:43.826632 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:42:43.826647 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.826662 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.826673 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.826684 | orchestrator |
2025-05-28 19:42:43.826693 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-28 19:42:43.826705 | orchestrator | Wednesday 28 May 2025 19:39:58 +0000 (0:00:00.845) 0:05:36.391 *********
2025-05-28 19:42:43.826716 | orchestrator |
2025-05-28 19:42:43.826727 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-28 19:42:43.826739 | orchestrator | Wednesday 28 May 2025 19:39:58 +0000 (0:00:00.109) 0:05:36.500 *********
2025-05-28 19:42:43.826750 | orchestrator |
2025-05-28 19:42:43.826761 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-28 19:42:43.826768 | orchestrator | Wednesday 28 May 2025 19:39:58 +0000 (0:00:00.267) 0:05:36.768 *********
2025-05-28 19:42:43.826775 | orchestrator |
2025-05-28 19:42:43.826782 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-28 19:42:43.826789 | orchestrator | Wednesday 28 May 2025 19:39:59 +0000 (0:00:00.110) 0:05:36.879 *********
2025-05-28 19:42:43.826795 | orchestrator |
2025-05-28 19:42:43.826802 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-28 19:42:43.826809 | orchestrator | Wednesday 28 May 2025 19:39:59 +0000 (0:00:00.290) 0:05:37.169 *********
2025-05-28 19:42:43.826815 | orchestrator |
2025-05-28 19:42:43.826822 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-28 19:42:43.826829 | orchestrator | Wednesday 28 May 2025 19:39:59 +0000 (0:00:00.115) 0:05:37.284 *********
2025-05-28 19:42:43.826835 | orchestrator |
2025-05-28 19:42:43.826842 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-05-28 19:42:43.826849 | orchestrator | Wednesday 28 May 2025 19:39:59 +0000 (0:00:00.265) 0:05:37.549 *********
2025-05-28 19:42:43.826855 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:42:43.826862 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:42:43.826869 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:42:43.826876 | orchestrator |
2025-05-28 19:42:43.826883 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-05-28 19:42:43.826890 | orchestrator | Wednesday 28 May 2025 19:40:12 +0000 (0:00:12.900) 0:05:50.450 *********
2025-05-28 19:42:43.826902 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:42:43.826909 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:42:43.826915 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:42:43.826922 | orchestrator |
2025-05-28 19:42:43.826929 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-05-28 19:42:43.826936 | orchestrator | Wednesday 28 May 2025 19:40:28 +0000 (0:00:16.300) 0:06:06.751 *********
2025-05-28 19:42:43.826943 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:42:43.826950 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:42:43.826956 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:42:43.826963 | orchestrator |
2025-05-28 19:42:43.826976 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-05-28 19:42:43.826983 | orchestrator | Wednesday 28 May 2025 19:40:50 +0000 (0:00:21.204) 0:06:27.955 *********
2025-05-28 19:42:43.826990 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:42:43.826997 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:42:43.827004 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:42:43.827010 | orchestrator |
2025-05-28 19:42:43.827017 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-05-28 19:42:43.827024 | orchestrator | Wednesday 28 May 2025 19:41:15 +0000 (0:00:25.774) 0:06:53.729 *********
2025-05-28 19:42:43.827031 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:42:43.827038 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:42:43.827044 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:42:43.827051 | orchestrator |
2025-05-28 19:42:43.827058 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-05-28 19:42:43.827065 | orchestrator | Wednesday 28 May 2025 19:41:16 +0000 (0:00:00.727) 0:06:54.456 *********
2025-05-28 19:42:43.827072 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:42:43.827078 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:42:43.827085 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:42:43.827092 | orchestrator |
2025-05-28 19:42:43.827099 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-05-28 19:42:43.827106 | orchestrator | Wednesday 28 May 2025 19:41:17 +0000 (0:00:00.914) 0:06:55.371 *********
2025-05-28 19:42:43.827112 | orchestrator | changed: [testbed-node-5]
2025-05-28 19:42:43.827119 | orchestrator | changed: [testbed-node-3]
2025-05-28 19:42:43.827126 | orchestrator | changed: [testbed-node-4]
2025-05-28 19:42:43.827133 | orchestrator |
2025-05-28 19:42:43.827139 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-05-28 19:42:43.827146 | orchestrator | Wednesday 28 May 2025 19:41:39 +0000 (0:00:21.450) 0:07:16.821 *********
2025-05-28 19:42:43.827153 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:42:43.827160 | orchestrator |
2025-05-28 19:42:43.827167 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-05-28 19:42:43.827173 | orchestrator | Wednesday 28 May 2025 19:41:39 +0000 (0:00:00.121) 0:07:16.943 *********
2025-05-28 19:42:43.827180 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:42:43.827187 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.827194 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:42:43.827200 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.827207 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.827214 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-05-28 19:42:43.827225 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-28 19:42:43.827232 | orchestrator |
2025-05-28 19:42:43.827239 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-05-28 19:42:43.827246 | orchestrator | Wednesday 28 May 2025 19:42:00 +0000 (0:00:21.280) 0:07:38.224 *********
2025-05-28 19:42:43.827253 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.827260 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:42:43.827266 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.827273 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.827280 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:42:43.827286 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:42:43.827293 | orchestrator |
2025-05-28 19:42:43.827300 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-05-28 19:42:43.827307 | orchestrator | Wednesday 28 May 2025 19:42:09 +0000 (0:00:09.006) 0:07:47.231 *********
2025-05-28 19:42:43.827334 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:42:43.827342 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.827348 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:42:43.827355 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.827366 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.827373 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-05-28 19:42:43.827379 | orchestrator |
2025-05-28 19:42:43.827386 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-28 19:42:43.827393 | orchestrator | Wednesday 28 May 2025 19:42:12 +0000 (0:00:03.055) 0:07:50.286 *********
2025-05-28 19:42:43.827400 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-28 19:42:43.827407 | orchestrator |
2025-05-28 19:42:43.827413 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-28 19:42:43.827420 | orchestrator | Wednesday 28 May 2025 19:42:22 +0000 (0:00:10.064) 0:08:00.350 *********
2025-05-28 19:42:43.827427 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-28 19:42:43.827434 | orchestrator |
2025-05-28 19:42:43.827441 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-05-28 19:42:43.827447 | orchestrator | Wednesday 28 May 2025 19:42:23 +0000 (0:00:01.105) 0:08:01.456 *********
2025-05-28 19:42:43.827454 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:42:43.827461 | orchestrator |
2025-05-28 19:42:43.827467 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-05-28 19:42:43.827474 | orchestrator | Wednesday 28 May 2025 19:42:24 +0000 (0:00:01.118) 0:08:02.575 *********
2025-05-28 19:42:43.827481 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-28 19:42:43.827488 | orchestrator |
2025-05-28 19:42:43.827495 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-05-28 19:42:43.827505 | orchestrator | Wednesday 28 May 2025 19:42:33 +0000 (0:00:09.114) 0:08:11.689 *********
2025-05-28 19:42:43.827513 | orchestrator | ok: [testbed-node-3]
2025-05-28 19:42:43.827519 | orchestrator | ok: [testbed-node-4]
2025-05-28 19:42:43.827526 | orchestrator | ok: [testbed-node-5]
2025-05-28 19:42:43.827533 | orchestrator | ok: [testbed-node-0]
2025-05-28 19:42:43.827540 | orchestrator | ok: [testbed-node-1]
2025-05-28 19:42:43.827547 | orchestrator | ok: [testbed-node-2]
2025-05-28 19:42:43.827554 | orchestrator |
2025-05-28 19:42:43.827561 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-05-28 19:42:43.827567 | orchestrator |
2025-05-28 19:42:43.827574 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-05-28 19:42:43.827581 | orchestrator | Wednesday 28 May 2025 19:42:35 +0000 (0:00:02.104) 0:08:13.794 *********
2025-05-28 19:42:43.827588 | orchestrator | changed: [testbed-node-0]
2025-05-28 19:42:43.827595 | orchestrator | changed: [testbed-node-1]
2025-05-28 19:42:43.827601 | orchestrator | changed: [testbed-node-2]
2025-05-28 19:42:43.827608 | orchestrator |
2025-05-28 19:42:43.827615 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-05-28 19:42:43.827622 | orchestrator |
2025-05-28 19:42:43.827628 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-05-28 19:42:43.827635 | orchestrator | Wednesday 28 May 2025 19:42:37 +0000 (0:00:01.032) 0:08:14.827 *********
2025-05-28 19:42:43.827642 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.827649 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.827655 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.827662 | orchestrator |
2025-05-28 19:42:43.827669 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-05-28 19:42:43.827676 | orchestrator |
2025-05-28 19:42:43.827682 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-05-28 19:42:43.827689 | orchestrator | Wednesday 28 May 2025 19:42:37 +0000 (0:00:00.718) 0:08:15.546 *********
2025-05-28 19:42:43.827696 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-05-28 19:42:43.827703 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-05-28 19:42:43.827710 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-05-28 19:42:43.827716 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-05-28 19:42:43.827727 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-05-28 19:42:43.827734 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-05-28 19:42:43.827741 | orchestrator | skipping: [testbed-node-3]
2025-05-28 19:42:43.827747 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-05-28 19:42:43.827754 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-05-28 19:42:43.827761 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-05-28 19:42:43.827768 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-05-28 19:42:43.827775 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-05-28 19:42:43.827781 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-05-28 19:42:43.827788 | orchestrator | skipping: [testbed-node-4]
2025-05-28 19:42:43.827795 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-05-28 19:42:43.827802 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-05-28 19:42:43.827813 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-05-28 19:42:43.827820 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-05-28 19:42:43.827827 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-05-28 19:42:43.827834 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-05-28 19:42:43.827841 | orchestrator | skipping: [testbed-node-5]
2025-05-28 19:42:43.827847 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-05-28 19:42:43.827854 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-05-28 19:42:43.827861 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-05-28 19:42:43.827868 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-05-28 19:42:43.827875 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-05-28 19:42:43.827881 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-05-28 19:42:43.827888 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.827895 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-05-28 19:42:43.827902 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-05-28 19:42:43.827908 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-05-28 19:42:43.827915 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-05-28 19:42:43.827922 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-05-28 19:42:43.827929 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-05-28 19:42:43.827935 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.827942 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-05-28 19:42:43.827949 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-05-28 19:42:43.827955 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-05-28 19:42:43.827962 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-05-28 19:42:43.827969 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-05-28 19:42:43.827976 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-05-28 19:42:43.827983 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.827989 | orchestrator |
2025-05-28 19:42:43.827996 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-05-28 19:42:43.828003 | orchestrator |
2025-05-28 19:42:43.828010 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-05-28 19:42:43.828016 | orchestrator | Wednesday 28 May 2025 19:42:39 +0000 (0:00:01.301) 0:08:16.847 *********
2025-05-28 19:42:43.828023 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-05-28 19:42:43.828034 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-05-28 19:42:43.828046 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.828069 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-05-28 19:42:43.828081 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-05-28 19:42:43.828092 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.828103 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-05-28 19:42:43.828114 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-05-28 19:42:43.828124 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.828135 | orchestrator |
2025-05-28 19:42:43.828146 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-05-28 19:42:43.828157 | orchestrator |
2025-05-28 19:42:43.828169 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-05-28 19:42:43.828176 | orchestrator | Wednesday 28 May 2025 19:42:39 +0000 (0:00:00.741) 0:08:17.588 *********
2025-05-28 19:42:43.828183 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.828190 | orchestrator |
2025-05-28 19:42:43.828196 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-05-28 19:42:43.828203 | orchestrator |
2025-05-28 19:42:43.828210 |
orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-05-28 19:42:43.828216 | orchestrator | Wednesday 28 May 2025 19:42:40 +0000 (0:00:00.851) 0:08:18.439 *********
2025-05-28 19:42:43.828223 | orchestrator | skipping: [testbed-node-0]
2025-05-28 19:42:43.828230 | orchestrator | skipping: [testbed-node-1]
2025-05-28 19:42:43.828236 | orchestrator | skipping: [testbed-node-2]
2025-05-28 19:42:43.828243 | orchestrator |
2025-05-28 19:42:43.828250 | orchestrator | PLAY RECAP *********************************************************************
2025-05-28 19:42:43.828256 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-28 19:42:43.828263 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-05-28 19:42:43.828270 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-28 19:42:43.828277 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-28 19:42:43.828284 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-05-28 19:42:43.828291 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-28 19:42:43.828301 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-28 19:42:43.828348 | orchestrator |
2025-05-28 19:42:43.828358 | orchestrator |
2025-05-28 19:42:43.828364 | orchestrator | TASKS RECAP ********************************************************************
2025-05-28 19:42:43.828371 | orchestrator | Wednesday 28 May 2025 19:42:41 +0000 (0:00:00.530) 0:08:18.970 *********
2025-05-28 19:42:43.828378 | orchestrator | ===============================================================================
2025-05-28 19:42:43.828385 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.93s
2025-05-28 19:42:43.828391 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 25.77s
2025-05-28 19:42:43.828398 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.64s
2025-05-28 19:42:43.828405 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.45s
2025-05-28 19:42:43.828411 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.28s
2025-05-28 19:42:43.828418 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.20s
2025-05-28 19:42:43.828424 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.48s
2025-05-28 19:42:43.828436 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.10s
2025-05-28 19:42:43.828443 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.30s
2025-05-28 19:42:43.828450 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.77s
2025-05-28 19:42:43.828456 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.90s
2025-05-28 19:42:43.828463 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.53s
2025-05-28 19:42:43.828470 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.53s
2025-05-28 19:42:43.828476 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.06s
2025-05-28 19:42:43.828483 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 9.94s
2025-05-28 19:42:43.828489 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.60s
2025-05-28 19:42:43.828496 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.11s
2025-05-28 19:42:43.828502 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.01s
2025-05-28 19:42:43.828509 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.98s
2025-05-28 19:42:43.828516 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.12s
2025-05-28 19:42:43.828527 | orchestrator | 2025-05-28 19:42:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:42:43.828535 | orchestrator | 2025-05-28 19:42:43 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:42:46.864075 | orchestrator | 2025-05-28 19:42:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:42:46.864177 | orchestrator | 2025-05-28 19:42:46 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:42:49.905962 | orchestrator | 2025-05-28 19:42:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:42:49.906105 | orchestrator | 2025-05-28 19:42:49 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:42:52.951900 | orchestrator | 2025-05-28 19:42:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:42:52.952011 | orchestrator | 2025-05-28 19:42:52 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:42:55.998785 | orchestrator | 2025-05-28 19:42:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:42:55.998885 | orchestrator | 2025-05-28 19:42:55 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:42:59.046530 | orchestrator | 2025-05-28 19:42:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:42:59.046655 | orchestrator | 2025-05-28 19:42:59 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:02.091843
| orchestrator | 2025-05-28 19:43:02 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:02.091894 | orchestrator | 2025-05-28 19:43:02 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:05.129865 | orchestrator | 2025-05-28 19:43:05 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:05.129952 | orchestrator | 2025-05-28 19:43:05 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:08.177999 | orchestrator | 2025-05-28 19:43:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:08.178206 | orchestrator | 2025-05-28 19:43:08 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:11.225201 | orchestrator | 2025-05-28 19:43:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:11.225423 | orchestrator | 2025-05-28 19:43:11 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:14.273938 | orchestrator | 2025-05-28 19:43:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:14.274109 | orchestrator | 2025-05-28 19:43:14 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:17.319690 | orchestrator | 2025-05-28 19:43:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:17.319791 | orchestrator | 2025-05-28 19:43:17 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:20.364421 | orchestrator | 2025-05-28 19:43:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:20.364531 | orchestrator | 2025-05-28 19:43:20 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:23.414118 | orchestrator | 2025-05-28 19:43:23 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:23.414183 | orchestrator | 2025-05-28 19:43:23 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:26.459951 | orchestrator | 2025-05-28 19:43:26 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:26.460057 | orchestrator | 2025-05-28 19:43:26 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:29.510122 | orchestrator | 2025-05-28 19:43:29 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:29.510222 | orchestrator | 2025-05-28 19:43:29 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:32.560800 | orchestrator | 2025-05-28 19:43:32 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:32.560927 | orchestrator | 2025-05-28 19:43:32 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:35.607378 | orchestrator | 2025-05-28 19:43:35 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:35.607478 | orchestrator | 2025-05-28 19:43:35 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:38.652798 | orchestrator | 2025-05-28 19:43:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:38.652902 | orchestrator | 2025-05-28 19:43:38 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:41.697428 | orchestrator | 2025-05-28 19:43:41 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:41.697732 | orchestrator | 2025-05-28 19:43:41 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:44.742834 | orchestrator | 2025-05-28 19:43:44 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:44.742904 | orchestrator | 2025-05-28 19:43:44 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:47.784893 | orchestrator | 2025-05-28 19:43:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:47.784996 | orchestrator | 2025-05-28 19:43:47 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:50.834968 | orchestrator | 2025-05-28 19:43:50 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:50.835076 | orchestrator | 2025-05-28 19:43:50 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:53.885876 | orchestrator | 2025-05-28 19:43:53 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:53.885990 | orchestrator | 2025-05-28 19:43:53 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:56.928098 | orchestrator | 2025-05-28 19:43:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:56.928318 | orchestrator | 2025-05-28 19:43:56 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:43:59.975236 | orchestrator | 2025-05-28 19:43:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:43:59.975403 | orchestrator | 2025-05-28 19:43:59 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:44:03.023532 | orchestrator | 2025-05-28 19:44:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:44:03.023671 | orchestrator | 2025-05-28 19:44:03 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:44:06.071082 | orchestrator | 2025-05-28 19:44:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:44:06.071193 | orchestrator | 2025-05-28 19:44:06 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:44:09.122426 | orchestrator | 2025-05-28 19:44:09 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:44:09.122548 | orchestrator | 2025-05-28 19:44:09 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:44:12.164889 | orchestrator | 2025-05-28 19:44:12 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:44:12.165007 | orchestrator | 2025-05-28 19:44:12 | INFO  | Wait 1 second(s) until the next check
2025-05-28 19:44:15.205329 | orchestrator | 2025-05-28 19:44:15 | INFO
 | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:15.205439 | orchestrator | 2025-05-28 19:44:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:18.249778 | orchestrator | 2025-05-28 19:44:18 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:18.249882 | orchestrator | 2025-05-28 19:44:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:21.290904 | orchestrator | 2025-05-28 19:44:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:21.291003 | orchestrator | 2025-05-28 19:44:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:24.329785 | orchestrator | 2025-05-28 19:44:24 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:24.329878 | orchestrator | 2025-05-28 19:44:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:27.384143 | orchestrator | 2025-05-28 19:44:27 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:27.384300 | orchestrator | 2025-05-28 19:44:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:30.443503 | orchestrator | 2025-05-28 19:44:30 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:30.443609 | orchestrator | 2025-05-28 19:44:30 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:33.478412 | orchestrator | 2025-05-28 19:44:33 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:33.478524 | orchestrator | 2025-05-28 19:44:33 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:36.530488 | orchestrator | 2025-05-28 19:44:36 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:36.530611 | orchestrator | 2025-05-28 19:44:36 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:39.585595 | orchestrator | 2025-05-28 19:44:39 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:39.585690 | orchestrator | 2025-05-28 19:44:39 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:42.632472 | orchestrator | 2025-05-28 19:44:42 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:42.632610 | orchestrator | 2025-05-28 19:44:42 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:45.680768 | orchestrator | 2025-05-28 19:44:45 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:45.680873 | orchestrator | 2025-05-28 19:44:45 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:48.726212 | orchestrator | 2025-05-28 19:44:48 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:48.726383 | orchestrator | 2025-05-28 19:44:48 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:51.775845 | orchestrator | 2025-05-28 19:44:51 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:51.775950 | orchestrator | 2025-05-28 19:44:51 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:54.825807 | orchestrator | 2025-05-28 19:44:54 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:54.825941 | orchestrator | 2025-05-28 19:44:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:44:57.871679 | orchestrator | 2025-05-28 19:44:57 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:44:57.871763 | orchestrator | 2025-05-28 19:44:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:00.915615 | orchestrator | 2025-05-28 19:45:00 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:00.915725 | orchestrator | 2025-05-28 19:45:00 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:03.963797 | orchestrator | 2025-05-28 19:45:03 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:03.963905 | orchestrator | 2025-05-28 19:45:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:07.017384 | orchestrator | 2025-05-28 19:45:07 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:07.017495 | orchestrator | 2025-05-28 19:45:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:10.067694 | orchestrator | 2025-05-28 19:45:10 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:10.067795 | orchestrator | 2025-05-28 19:45:10 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:13.115786 | orchestrator | 2025-05-28 19:45:13 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:13.115905 | orchestrator | 2025-05-28 19:45:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:16.159745 | orchestrator | 2025-05-28 19:45:16 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:16.159857 | orchestrator | 2025-05-28 19:45:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:19.205279 | orchestrator | 2025-05-28 19:45:19 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:19.205371 | orchestrator | 2025-05-28 19:45:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:22.251371 | orchestrator | 2025-05-28 19:45:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:22.251467 | orchestrator | 2025-05-28 19:45:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:25.294204 | orchestrator | 2025-05-28 19:45:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:25.294383 | orchestrator | 2025-05-28 19:45:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:28.346136 | orchestrator | 2025-05-28 19:45:28 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:28.346350 | orchestrator | 2025-05-28 19:45:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:31.389478 | orchestrator | 2025-05-28 19:45:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:31.389584 | orchestrator | 2025-05-28 19:45:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:34.434192 | orchestrator | 2025-05-28 19:45:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:34.434323 | orchestrator | 2025-05-28 19:45:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:37.480172 | orchestrator | 2025-05-28 19:45:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:37.480277 | orchestrator | 2025-05-28 19:45:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:40.530669 | orchestrator | 2025-05-28 19:45:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:40.530755 | orchestrator | 2025-05-28 19:45:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:43.582788 | orchestrator | 2025-05-28 19:45:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:43.582879 | orchestrator | 2025-05-28 19:45:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:46.625181 | orchestrator | 2025-05-28 19:45:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:46.625308 | orchestrator | 2025-05-28 19:45:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:49.676540 | orchestrator | 2025-05-28 19:45:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:49.676649 | orchestrator | 2025-05-28 19:45:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:52.726447 | orchestrator | 2025-05-28 19:45:52 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:52.726545 | orchestrator | 2025-05-28 19:45:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:55.775550 | orchestrator | 2025-05-28 19:45:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:55.775660 | orchestrator | 2025-05-28 19:45:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:45:58.818433 | orchestrator | 2025-05-28 19:45:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:45:58.818523 | orchestrator | 2025-05-28 19:45:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:01.866862 | orchestrator | 2025-05-28 19:46:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:01.866969 | orchestrator | 2025-05-28 19:46:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:04.916019 | orchestrator | 2025-05-28 19:46:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:04.916123 | orchestrator | 2025-05-28 19:46:04 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:07.959312 | orchestrator | 2025-05-28 19:46:07 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:07.959438 | orchestrator | 2025-05-28 19:46:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:11.010421 | orchestrator | 2025-05-28 19:46:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:11.010553 | orchestrator | 2025-05-28 19:46:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:14.050379 | orchestrator | 2025-05-28 19:46:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:14.050550 | orchestrator | 2025-05-28 19:46:14 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:17.095442 | orchestrator | 2025-05-28 19:46:17 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:17.095545 | orchestrator | 2025-05-28 19:46:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:20.148584 | orchestrator | 2025-05-28 19:46:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:20.148720 | orchestrator | 2025-05-28 19:46:20 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:23.202532 | orchestrator | 2025-05-28 19:46:23 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:23.202662 | orchestrator | 2025-05-28 19:46:23 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:26.252473 | orchestrator | 2025-05-28 19:46:26 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:26.252611 | orchestrator | 2025-05-28 19:46:26 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:29.304886 | orchestrator | 2025-05-28 19:46:29 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:29.304989 | orchestrator | 2025-05-28 19:46:29 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:32.351261 | orchestrator | 2025-05-28 19:46:32 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:32.351368 | orchestrator | 2025-05-28 19:46:32 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:35.401057 | orchestrator | 2025-05-28 19:46:35 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:35.401149 | orchestrator | 2025-05-28 19:46:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:38.460294 | orchestrator | 2025-05-28 19:46:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:38.460385 | orchestrator | 2025-05-28 19:46:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:41.510232 | orchestrator | 2025-05-28 19:46:41 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:41.510318 | orchestrator | 2025-05-28 19:46:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:44.555973 | orchestrator | 2025-05-28 19:46:44 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:44.556082 | orchestrator | 2025-05-28 19:46:44 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:47.601137 | orchestrator | 2025-05-28 19:46:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:47.601400 | orchestrator | 2025-05-28 19:46:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:50.649290 | orchestrator | 2025-05-28 19:46:50 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:50.649400 | orchestrator | 2025-05-28 19:46:50 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:53.697492 | orchestrator | 2025-05-28 19:46:53 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:53.697586 | orchestrator | 2025-05-28 19:46:53 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:56.744068 | orchestrator | 2025-05-28 19:46:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:56.744211 | orchestrator | 2025-05-28 19:46:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:46:59.789851 | orchestrator | 2025-05-28 19:46:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:46:59.789977 | orchestrator | 2025-05-28 19:46:59 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:02.832263 | orchestrator | 2025-05-28 19:47:02 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:02.832355 | orchestrator | 2025-05-28 19:47:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:05.879052 | orchestrator | 2025-05-28 19:47:05 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:05.879269 | orchestrator | 2025-05-28 19:47:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:08.940704 | orchestrator | 2025-05-28 19:47:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:08.940836 | orchestrator | 2025-05-28 19:47:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:11.991756 | orchestrator | 2025-05-28 19:47:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:11.991896 | orchestrator | 2025-05-28 19:47:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:15.040142 | orchestrator | 2025-05-28 19:47:15 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:15.040289 | orchestrator | 2025-05-28 19:47:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:18.095753 | orchestrator | 2025-05-28 19:47:18 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:18.095887 | orchestrator | 2025-05-28 19:47:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:21.144707 | orchestrator | 2025-05-28 19:47:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:21.144814 | orchestrator | 2025-05-28 19:47:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:24.191223 | orchestrator | 2025-05-28 19:47:24 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:24.191330 | orchestrator | 2025-05-28 19:47:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:27.242299 | orchestrator | 2025-05-28 19:47:27 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:27.242420 | orchestrator | 2025-05-28 19:47:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:30.295710 | orchestrator | 2025-05-28 19:47:30 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:30.295816 | orchestrator | 2025-05-28 19:47:30 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:33.348600 | orchestrator | 2025-05-28 19:47:33 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:33.348701 | orchestrator | 2025-05-28 19:47:33 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:36.397545 | orchestrator | 2025-05-28 19:47:36 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:36.397635 | orchestrator | 2025-05-28 19:47:36 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:39.451449 | orchestrator | 2025-05-28 19:47:39 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:39.451557 | orchestrator | 2025-05-28 19:47:39 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:42.499919 | orchestrator | 2025-05-28 19:47:42 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:42.500022 | orchestrator | 2025-05-28 19:47:42 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:45.545293 | orchestrator | 2025-05-28 19:47:45 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:45.545431 | orchestrator | 2025-05-28 19:47:45 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:48.592230 | orchestrator | 2025-05-28 19:47:48 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:48.592339 | orchestrator | 2025-05-28 19:47:48 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:51.644305 | orchestrator | 2025-05-28 19:47:51 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:51.644412 | orchestrator | 2025-05-28 19:47:51 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:54.689045 | orchestrator | 2025-05-28 19:47:54 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:54.689169 | orchestrator | 2025-05-28 19:47:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:47:57.741708 | orchestrator | 2025-05-28 19:47:57 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:47:57.741804 | orchestrator | 2025-05-28 19:47:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:00.785228 | orchestrator | 2025-05-28 19:48:00 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:00.785338 | orchestrator | 2025-05-28 19:48:00 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:03.838987 | orchestrator | 2025-05-28 19:48:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:03.839076 | orchestrator | 2025-05-28 19:48:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:06.882245 | orchestrator | 2025-05-28 19:48:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:06.882347 | orchestrator | 2025-05-28 19:48:06 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:09.928244 | orchestrator | 2025-05-28 19:48:09 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:09.928354 | orchestrator | 2025-05-28 19:48:09 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:12.979061 | orchestrator | 2025-05-28 19:48:12 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:12.979189 | orchestrator | 2025-05-28 19:48:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:16.031089 | orchestrator | 2025-05-28 19:48:16 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:16.031234 | orchestrator | 2025-05-28 19:48:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:19.073531 | orchestrator | 2025-05-28 19:48:19 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:19.073630 | orchestrator | 2025-05-28 19:48:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:22.111578 | orchestrator | 2025-05-28 19:48:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:22.111672 | orchestrator | 2025-05-28 19:48:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:25.157194 | orchestrator | 2025-05-28 19:48:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:25.157300 | orchestrator | 2025-05-28 19:48:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:28.200033 | orchestrator | 2025-05-28 19:48:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:28.200225 | orchestrator | 2025-05-28 19:48:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:31.253418 | orchestrator | 2025-05-28 19:48:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:31.253550 | orchestrator | 2025-05-28 19:48:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:34.299424 | orchestrator | 2025-05-28 19:48:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:34.299485 | orchestrator | 2025-05-28 19:48:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:37.345662 | orchestrator | 2025-05-28 19:48:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:37.345763 | orchestrator | 2025-05-28 19:48:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:40.394340 | orchestrator | 2025-05-28 19:48:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:40.394449 | orchestrator | 2025-05-28 19:48:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:43.445153 | orchestrator | 2025-05-28 19:48:43 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:43.445254 | orchestrator | 2025-05-28 19:48:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:46.491620 | orchestrator | 2025-05-28 19:48:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:46.491740 | orchestrator | 2025-05-28 19:48:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:49.536594 | orchestrator | 2025-05-28 19:48:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:49.536707 | orchestrator | 2025-05-28 19:48:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:52.587100 | orchestrator | 2025-05-28 19:48:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:52.587208 | orchestrator | 2025-05-28 19:48:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:55.640401 | orchestrator | 2025-05-28 19:48:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:55.640496 | orchestrator | 2025-05-28 19:48:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:48:58.692720 | orchestrator | 2025-05-28 19:48:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:48:58.692850 | orchestrator | 2025-05-28 19:48:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:01.744962 | orchestrator | 2025-05-28 19:49:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:49:01.745068 | orchestrator | 2025-05-28 19:49:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:04.788301 | orchestrator | 2025-05-28 19:49:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:49:04.788402 | orchestrator | 2025-05-28 19:49:04 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:07.832820 | orchestrator | 2025-05-28 19:49:07 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:49:07.832933 | orchestrator | 2025-05-28 19:49:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:10.883443 | orchestrator | 2025-05-28 19:49:10 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:49:10.883535 | orchestrator | 2025-05-28 19:49:10 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:13.941373 | orchestrator | 2025-05-28 19:49:13 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:49:13.941470 | orchestrator | 2025-05-28 19:49:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:17.002677 | orchestrator | 2025-05-28 19:49:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:49:17.002793 | orchestrator | 2025-05-28 19:49:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:20.050279 | orchestrator | 2025-05-28 19:49:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:49:20.050405 | orchestrator | 2025-05-28 19:49:20 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:23.100273 | orchestrator | 2025-05-28 19:49:23 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:49:23.100378 | orchestrator | 2025-05-28 19:49:23 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:26.145434 | orchestrator | 2025-05-28 19:49:26 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:49:26.145540 | orchestrator | 2025-05-28 19:49:26 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:29.189905 | orchestrator | 2025-05-28 19:49:29 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:49:29.190013 | orchestrator | 2025-05-28 19:49:29 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:49:32.238308 | orchestrator | 2025-05-28 19:49:32 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:49:32.238400 | orchestrator | 2025-05-28 19:49:32 | INFO  | Wait 1 second(s) until the next check
[... identical "Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED" / "Wait 1 second(s) until the next check" polling entries, repeated every ~3 s from 19:49:35 to 19:51:43, omitted ...]
2025-05-28 19:51:46.401201 | orchestrator | 2025-05-28 19:51:46 | INFO  | Task 91dc7802-b95d-4b82-8965-f2e1384488ed is in state STARTED
2025-05-28 19:51:46.402653 | orchestrator | 2025-05-28 19:51:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:51:46.402686 | orchestrator | 2025-05-28 19:51:46 | INFO  | Wait 1 second(s) until the next check
[... both tasks polled in state STARTED at 19:51:49, 19:51:52 and 19:51:55, omitted ...]
2025-05-28 19:51:58.615839 | orchestrator | 2025-05-28 19:51:58 | INFO  | Task 91dc7802-b95d-4b82-8965-f2e1384488ed is in state SUCCESS
2025-05-28 19:51:58.617014 | orchestrator | 2025-05-28 19:51:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 19:51:58.617075 | orchestrator | 2025-05-28 19:51:58 | INFO  | Wait 1 second(s) until the next check
[... identical "Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED" / "Wait 1 second(s) until the next check" polling entries, repeated every ~3 s from 19:52:01 to 19:57:55, omitted ...]
2025-05-28 19:57:58.342249 | orchestrator | 2025-05-28 19:57:58 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:57:58.342345 | orchestrator | 2025-05-28 19:57:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:01.395130 | orchestrator | 2025-05-28 19:58:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:01.395203 | orchestrator | 2025-05-28 19:58:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:04.441413 | orchestrator | 2025-05-28 19:58:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:04.441521 | orchestrator | 2025-05-28 19:58:04 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:07.497415 | orchestrator | 2025-05-28 19:58:07 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:07.497517 | orchestrator | 2025-05-28 19:58:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:10.544466 | orchestrator | 2025-05-28 19:58:10 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:10.544572 | orchestrator | 2025-05-28 19:58:10 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:13.592571 | orchestrator | 2025-05-28 19:58:13 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:13.592676 | orchestrator | 2025-05-28 19:58:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:16.633733 | orchestrator | 2025-05-28 19:58:16 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:16.633873 | orchestrator | 2025-05-28 19:58:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:19.690666 | orchestrator | 2025-05-28 19:58:19 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:19.690742 | orchestrator | 2025-05-28 19:58:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:22.746405 | orchestrator | 2025-05-28 19:58:22 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:22.746515 | orchestrator | 2025-05-28 19:58:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:25.798594 | orchestrator | 2025-05-28 19:58:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:25.798717 | orchestrator | 2025-05-28 19:58:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:28.849887 | orchestrator | 2025-05-28 19:58:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:28.850108 | orchestrator | 2025-05-28 19:58:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:31.908314 | orchestrator | 2025-05-28 19:58:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:31.908420 | orchestrator | 2025-05-28 19:58:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:34.960390 | orchestrator | 2025-05-28 19:58:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:34.960484 | orchestrator | 2025-05-28 19:58:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:38.005715 | orchestrator | 2025-05-28 19:58:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:38.005810 | orchestrator | 2025-05-28 19:58:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:41.056860 | orchestrator | 2025-05-28 19:58:41 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:41.057071 | orchestrator | 2025-05-28 19:58:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:44.105107 | orchestrator | 2025-05-28 19:58:44 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:44.105192 | orchestrator | 2025-05-28 19:58:44 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:47.144872 | orchestrator | 2025-05-28 19:58:47 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:47.145044 | orchestrator | 2025-05-28 19:58:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:50.191252 | orchestrator | 2025-05-28 19:58:50 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:50.191354 | orchestrator | 2025-05-28 19:58:50 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:53.236202 | orchestrator | 2025-05-28 19:58:53 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:53.236314 | orchestrator | 2025-05-28 19:58:53 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:56.274574 | orchestrator | 2025-05-28 19:58:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:56.274661 | orchestrator | 2025-05-28 19:58:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:58:59.322364 | orchestrator | 2025-05-28 19:58:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:58:59.322472 | orchestrator | 2025-05-28 19:58:59 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:02.365192 | orchestrator | 2025-05-28 19:59:02 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:02.365299 | orchestrator | 2025-05-28 19:59:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:05.408243 | orchestrator | 2025-05-28 19:59:05 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:05.408317 | orchestrator | 2025-05-28 19:59:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:08.453559 | orchestrator | 2025-05-28 19:59:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:08.453680 | orchestrator | 2025-05-28 19:59:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:11.505493 | orchestrator | 2025-05-28 19:59:11 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:11.505592 | orchestrator | 2025-05-28 19:59:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:14.548651 | orchestrator | 2025-05-28 19:59:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:14.548748 | orchestrator | 2025-05-28 19:59:14 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:17.599603 | orchestrator | 2025-05-28 19:59:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:17.599768 | orchestrator | 2025-05-28 19:59:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:20.640379 | orchestrator | 2025-05-28 19:59:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:20.640463 | orchestrator | 2025-05-28 19:59:20 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:23.694328 | orchestrator | 2025-05-28 19:59:23 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:23.694441 | orchestrator | 2025-05-28 19:59:23 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:26.748001 | orchestrator | 2025-05-28 19:59:26 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:26.748172 | orchestrator | 2025-05-28 19:59:26 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:29.792999 | orchestrator | 2025-05-28 19:59:29 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:29.793158 | orchestrator | 2025-05-28 19:59:29 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:32.839952 | orchestrator | 2025-05-28 19:59:32 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:32.840118 | orchestrator | 2025-05-28 19:59:32 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:35.887663 | orchestrator | 2025-05-28 19:59:35 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:35.887763 | orchestrator | 2025-05-28 19:59:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:38.942946 | orchestrator | 2025-05-28 19:59:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:38.943094 | orchestrator | 2025-05-28 19:59:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:41.993152 | orchestrator | 2025-05-28 19:59:41 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:41.993333 | orchestrator | 2025-05-28 19:59:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:45.034743 | orchestrator | 2025-05-28 19:59:45 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:45.034841 | orchestrator | 2025-05-28 19:59:45 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:48.081770 | orchestrator | 2025-05-28 19:59:48 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:48.081879 | orchestrator | 2025-05-28 19:59:48 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:51.121769 | orchestrator | 2025-05-28 19:59:51 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:51.121870 | orchestrator | 2025-05-28 19:59:51 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:54.158462 | orchestrator | 2025-05-28 19:59:54 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:54.158561 | orchestrator | 2025-05-28 19:59:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 19:59:57.210730 | orchestrator | 2025-05-28 19:59:57 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 19:59:57.210836 | orchestrator | 2025-05-28 19:59:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:00.259818 | orchestrator | 2025-05-28 20:00:00 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:00.259892 | orchestrator | 2025-05-28 20:00:00 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:03.309015 | orchestrator | 2025-05-28 20:00:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:03.309160 | orchestrator | 2025-05-28 20:00:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:06.358691 | orchestrator | 2025-05-28 20:00:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:06.358815 | orchestrator | 2025-05-28 20:00:06 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:09.407860 | orchestrator | 2025-05-28 20:00:09 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:09.407950 | orchestrator | 2025-05-28 20:00:09 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:12.453409 | orchestrator | 2025-05-28 20:00:12 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:12.453502 | orchestrator | 2025-05-28 20:00:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:15.513345 | orchestrator | 2025-05-28 20:00:15 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:15.513441 | orchestrator | 2025-05-28 20:00:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:18.562255 | orchestrator | 2025-05-28 20:00:18 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:18.562363 | orchestrator | 2025-05-28 20:00:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:21.607734 | orchestrator | 2025-05-28 20:00:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:21.607812 | orchestrator | 2025-05-28 20:00:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:24.654132 | orchestrator | 2025-05-28 20:00:24 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:24.654281 | orchestrator | 2025-05-28 20:00:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:27.706366 | orchestrator | 2025-05-28 20:00:27 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:27.706470 | orchestrator | 2025-05-28 20:00:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:30.757481 | orchestrator | 2025-05-28 20:00:30 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:30.757573 | orchestrator | 2025-05-28 20:00:30 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:33.802332 | orchestrator | 2025-05-28 20:00:33 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:33.802420 | orchestrator | 2025-05-28 20:00:33 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:36.848771 | orchestrator | 2025-05-28 20:00:36 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:36.848869 | orchestrator | 2025-05-28 20:00:36 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:39.899044 | orchestrator | 2025-05-28 20:00:39 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:39.899184 | orchestrator | 2025-05-28 20:00:39 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:42.952812 | orchestrator | 2025-05-28 20:00:42 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:42.952919 | orchestrator | 2025-05-28 20:00:42 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:46.003358 | orchestrator | 2025-05-28 20:00:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:46.003448 | orchestrator | 2025-05-28 20:00:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:49.047528 | orchestrator | 2025-05-28 20:00:49 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:49.047626 | orchestrator | 2025-05-28 20:00:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:52.092107 | orchestrator | 2025-05-28 20:00:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:52.092227 | orchestrator | 2025-05-28 20:00:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:55.146931 | orchestrator | 2025-05-28 20:00:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:55.147014 | orchestrator | 2025-05-28 20:00:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:00:58.205362 | orchestrator | 2025-05-28 20:00:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:00:58.205471 | orchestrator | 2025-05-28 20:00:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:01.258662 | orchestrator | 2025-05-28 20:01:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:01.258764 | orchestrator | 2025-05-28 20:01:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:04.310298 | orchestrator | 2025-05-28 20:01:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:04.310405 | orchestrator | 2025-05-28 20:01:04 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:07.359881 | orchestrator | 2025-05-28 20:01:07 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:07.359990 | orchestrator | 2025-05-28 20:01:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:10.401589 | orchestrator | 2025-05-28 20:01:10 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:10.401687 | orchestrator | 2025-05-28 20:01:10 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:13.454247 | orchestrator | 2025-05-28 20:01:13 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:13.454366 | orchestrator | 2025-05-28 20:01:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:16.513053 | orchestrator | 2025-05-28 20:01:16 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:16.513242 | orchestrator | 2025-05-28 20:01:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:19.559218 | orchestrator | 2025-05-28 20:01:19 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:19.559331 | orchestrator | 2025-05-28 20:01:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:22.609900 | orchestrator | 2025-05-28 20:01:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:22.610135 | orchestrator | 2025-05-28 20:01:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:25.654502 | orchestrator | 2025-05-28 20:01:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:25.654627 | orchestrator | 2025-05-28 20:01:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:28.706231 | orchestrator | 2025-05-28 20:01:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:28.706341 | orchestrator | 2025-05-28 20:01:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:31.753559 | orchestrator | 2025-05-28 20:01:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:31.753644 | orchestrator | 2025-05-28 20:01:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:34.800918 | orchestrator | 2025-05-28 20:01:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:34.801048 | orchestrator | 2025-05-28 20:01:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:37.851383 | orchestrator | 2025-05-28 20:01:37 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:37.851502 | orchestrator | 2025-05-28 20:01:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:40.903962 | orchestrator | 2025-05-28 20:01:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:40.904067 | orchestrator | 2025-05-28 20:01:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:43.956395 | orchestrator | 2025-05-28 20:01:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:43.956471 | orchestrator | 2025-05-28 20:01:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:47.009470 | orchestrator | 2025-05-28 20:01:47 | INFO  | Task ad689f47-7972-416e-bbe3-4f6798773028 is in state STARTED 2025-05-28 20:01:47.011061 | orchestrator | 2025-05-28 20:01:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:47.011112 | orchestrator | 2025-05-28 20:01:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:50.067065 | orchestrator | 2025-05-28 20:01:50 | INFO  | Task ad689f47-7972-416e-bbe3-4f6798773028 is in state STARTED 2025-05-28 20:01:50.068715 | orchestrator | 2025-05-28 20:01:50 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:50.068796 | orchestrator | 2025-05-28 20:01:50 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:53.122532 | orchestrator | 2025-05-28 20:01:53 | INFO  | Task ad689f47-7972-416e-bbe3-4f6798773028 is in state STARTED 2025-05-28 20:01:53.122906 | orchestrator | 2025-05-28 20:01:53 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:53.123017 | orchestrator | 2025-05-28 20:01:53 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:56.166269 | orchestrator | 2025-05-28 20:01:56 | INFO  | Task ad689f47-7972-416e-bbe3-4f6798773028 is in state SUCCESS 2025-05-28 20:01:56.167354 | orchestrator | 
2025-05-28 20:01:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:56.167430 | orchestrator | 2025-05-28 20:01:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:01:59.213792 | orchestrator | 2025-05-28 20:01:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:01:59.213904 | orchestrator | 2025-05-28 20:01:59 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:02.262659 | orchestrator | 2025-05-28 20:02:02 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:02.262759 | orchestrator | 2025-05-28 20:02:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:05.307808 | orchestrator | 2025-05-28 20:02:05 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:05.307927 | orchestrator | 2025-05-28 20:02:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:08.354802 | orchestrator | 2025-05-28 20:02:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:08.354894 | orchestrator | 2025-05-28 20:02:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:11.401863 | orchestrator | 2025-05-28 20:02:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:11.401956 | orchestrator | 2025-05-28 20:02:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:14.449019 | orchestrator | 2025-05-28 20:02:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:14.449101 | orchestrator | 2025-05-28 20:02:14 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:17.495496 | orchestrator | 2025-05-28 20:02:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:17.495607 | orchestrator | 2025-05-28 20:02:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:20.544456 | orchestrator | 2025-05-28 
20:02:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:20.544556 | orchestrator | 2025-05-28 20:02:20 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:23.594094 | orchestrator | 2025-05-28 20:02:23 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:23.594269 | orchestrator | 2025-05-28 20:02:23 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:26.642378 | orchestrator | 2025-05-28 20:02:26 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:26.642453 | orchestrator | 2025-05-28 20:02:26 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:29.696728 | orchestrator | 2025-05-28 20:02:29 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:29.697535 | orchestrator | 2025-05-28 20:02:29 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:32.744996 | orchestrator | 2025-05-28 20:02:32 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:32.745100 | orchestrator | 2025-05-28 20:02:32 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:35.796149 | orchestrator | 2025-05-28 20:02:35 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:35.797827 | orchestrator | 2025-05-28 20:02:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:38.840353 | orchestrator | 2025-05-28 20:02:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:38.840458 | orchestrator | 2025-05-28 20:02:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:41.888893 | orchestrator | 2025-05-28 20:02:41 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:41.889038 | orchestrator | 2025-05-28 20:02:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:44.935975 | orchestrator | 2025-05-28 20:02:44 | INFO 
 | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:44.936077 | orchestrator | 2025-05-28 20:02:44 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:47.982936 | orchestrator | 2025-05-28 20:02:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:47.983036 | orchestrator | 2025-05-28 20:02:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:51.035500 | orchestrator | 2025-05-28 20:02:51 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:51.035575 | orchestrator | 2025-05-28 20:02:51 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:54.071925 | orchestrator | 2025-05-28 20:02:54 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:54.072091 | orchestrator | 2025-05-28 20:02:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:02:57.116519 | orchestrator | 2025-05-28 20:02:57 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:02:57.116618 | orchestrator | 2025-05-28 20:02:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:03:00.168670 | orchestrator | 2025-05-28 20:03:00 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:03:00.168783 | orchestrator | 2025-05-28 20:03:00 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:03:03.214011 | orchestrator | 2025-05-28 20:03:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:03:03.214179 | orchestrator | 2025-05-28 20:03:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:03:06.261861 | orchestrator | 2025-05-28 20:03:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:03:06.261947 | orchestrator | 2025-05-28 20:03:06 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:03:09.317911 | orchestrator | 2025-05-28 20:03:09 | INFO  | Task 
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:03:09.318131 | orchestrator | 2025-05-28 20:03:09 | INFO  | Wait 1 second(s) until the next check
[... identical "Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED" / "Wait 1 second(s) until the next check" log pairs repeated every ~3 seconds from 20:03:12 through 20:11:38 omitted ...]
2025-05-28 20:11:41.959920 | orchestrator | 2025-05-28 20:11:41 | INFO  | Task
32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:11:41.960026 | orchestrator | 2025-05-28 20:11:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:11:45.010102 | orchestrator | 2025-05-28 20:11:45 | INFO  | Task c6d0eddc-720c-452e-9a15-b0b1dd9337cb is in state STARTED 2025-05-28 20:11:45.014224 | orchestrator | 2025-05-28 20:11:45 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:11:45.014266 | orchestrator | 2025-05-28 20:11:45 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:11:48.075125 | orchestrator | 2025-05-28 20:11:48 | INFO  | Task c6d0eddc-720c-452e-9a15-b0b1dd9337cb is in state STARTED 2025-05-28 20:11:48.075685 | orchestrator | 2025-05-28 20:11:48 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:11:48.075971 | orchestrator | 2025-05-28 20:11:48 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:11:51.127751 | orchestrator | 2025-05-28 20:11:51 | INFO  | Task c6d0eddc-720c-452e-9a15-b0b1dd9337cb is in state STARTED 2025-05-28 20:11:51.128714 | orchestrator | 2025-05-28 20:11:51 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:11:51.128799 | orchestrator | 2025-05-28 20:11:51 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:11:54.188622 | orchestrator | 2025-05-28 20:11:54 | INFO  | Task c6d0eddc-720c-452e-9a15-b0b1dd9337cb is in state STARTED 2025-05-28 20:11:54.190008 | orchestrator | 2025-05-28 20:11:54 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:11:54.190101 | orchestrator | 2025-05-28 20:11:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:11:57.243428 | orchestrator | 2025-05-28 20:11:57 | INFO  | Task c6d0eddc-720c-452e-9a15-b0b1dd9337cb is in state SUCCESS 2025-05-28 20:11:57.244303 | orchestrator | 2025-05-28 20:11:57 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 
20:11:57.244338 | orchestrator | 2025-05-28 20:11:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:00.302192 | orchestrator | 2025-05-28 20:12:00 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:00.302291 | orchestrator | 2025-05-28 20:12:00 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:03.354372 | orchestrator | 2025-05-28 20:12:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:03.354479 | orchestrator | 2025-05-28 20:12:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:06.409574 | orchestrator | 2025-05-28 20:12:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:06.409678 | orchestrator | 2025-05-28 20:12:06 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:09.458613 | orchestrator | 2025-05-28 20:12:09 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:09.458738 | orchestrator | 2025-05-28 20:12:09 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:12.504908 | orchestrator | 2025-05-28 20:12:12 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:12.505023 | orchestrator | 2025-05-28 20:12:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:15.548414 | orchestrator | 2025-05-28 20:12:15 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:15.548594 | orchestrator | 2025-05-28 20:12:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:18.604802 | orchestrator | 2025-05-28 20:12:18 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:18.604907 | orchestrator | 2025-05-28 20:12:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:21.655635 | orchestrator | 2025-05-28 20:12:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:21.655756 
| orchestrator | 2025-05-28 20:12:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:24.711115 | orchestrator | 2025-05-28 20:12:24 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:24.711214 | orchestrator | 2025-05-28 20:12:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:27.765340 | orchestrator | 2025-05-28 20:12:27 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:27.765434 | orchestrator | 2025-05-28 20:12:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:30.811381 | orchestrator | 2025-05-28 20:12:30 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:30.811493 | orchestrator | 2025-05-28 20:12:30 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:33.859867 | orchestrator | 2025-05-28 20:12:33 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:33.859957 | orchestrator | 2025-05-28 20:12:33 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:36.909880 | orchestrator | 2025-05-28 20:12:36 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:36.909979 | orchestrator | 2025-05-28 20:12:36 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:39.965132 | orchestrator | 2025-05-28 20:12:39 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:39.965256 | orchestrator | 2025-05-28 20:12:39 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:43.018961 | orchestrator | 2025-05-28 20:12:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:43.019076 | orchestrator | 2025-05-28 20:12:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:46.067384 | orchestrator | 2025-05-28 20:12:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:46.067493 | orchestrator 
| 2025-05-28 20:12:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:49.121894 | orchestrator | 2025-05-28 20:12:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:49.122005 | orchestrator | 2025-05-28 20:12:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:52.164128 | orchestrator | 2025-05-28 20:12:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:52.164239 | orchestrator | 2025-05-28 20:12:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:55.203939 | orchestrator | 2025-05-28 20:12:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:55.204023 | orchestrator | 2025-05-28 20:12:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:12:58.256097 | orchestrator | 2025-05-28 20:12:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:12:58.256200 | orchestrator | 2025-05-28 20:12:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:01.303620 | orchestrator | 2025-05-28 20:13:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:01.303717 | orchestrator | 2025-05-28 20:13:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:04.353229 | orchestrator | 2025-05-28 20:13:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:04.353344 | orchestrator | 2025-05-28 20:13:04 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:07.404244 | orchestrator | 2025-05-28 20:13:07 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:07.404352 | orchestrator | 2025-05-28 20:13:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:10.456555 | orchestrator | 2025-05-28 20:13:10 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:10.456645 | orchestrator | 2025-05-28 
20:13:10 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:13.504110 | orchestrator | 2025-05-28 20:13:13 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:13.504195 | orchestrator | 2025-05-28 20:13:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:16.548297 | orchestrator | 2025-05-28 20:13:16 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:16.548380 | orchestrator | 2025-05-28 20:13:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:19.596924 | orchestrator | 2025-05-28 20:13:19 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:19.597043 | orchestrator | 2025-05-28 20:13:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:22.651601 | orchestrator | 2025-05-28 20:13:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:22.651710 | orchestrator | 2025-05-28 20:13:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:25.706262 | orchestrator | 2025-05-28 20:13:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:25.706433 | orchestrator | 2025-05-28 20:13:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:28.758857 | orchestrator | 2025-05-28 20:13:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:28.758983 | orchestrator | 2025-05-28 20:13:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:31.814383 | orchestrator | 2025-05-28 20:13:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:31.814490 | orchestrator | 2025-05-28 20:13:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:34.869957 | orchestrator | 2025-05-28 20:13:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:34.870122 | orchestrator | 2025-05-28 20:13:34 | INFO 
 | Wait 1 second(s) until the next check 2025-05-28 20:13:37.922878 | orchestrator | 2025-05-28 20:13:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:37.923059 | orchestrator | 2025-05-28 20:13:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:40.976828 | orchestrator | 2025-05-28 20:13:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:40.976930 | orchestrator | 2025-05-28 20:13:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:44.029156 | orchestrator | 2025-05-28 20:13:44 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:44.029236 | orchestrator | 2025-05-28 20:13:44 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:47.081829 | orchestrator | 2025-05-28 20:13:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:47.082307 | orchestrator | 2025-05-28 20:13:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:50.143070 | orchestrator | 2025-05-28 20:13:50 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:50.143207 | orchestrator | 2025-05-28 20:13:50 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:53.197091 | orchestrator | 2025-05-28 20:13:53 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:53.197200 | orchestrator | 2025-05-28 20:13:53 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:56.240619 | orchestrator | 2025-05-28 20:13:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:56.240729 | orchestrator | 2025-05-28 20:13:56 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:13:59.291076 | orchestrator | 2025-05-28 20:13:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:13:59.291181 | orchestrator | 2025-05-28 20:13:59 | INFO  | Wait 1 
second(s) until the next check 2025-05-28 20:14:02.339412 | orchestrator | 2025-05-28 20:14:02 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:02.339521 | orchestrator | 2025-05-28 20:14:02 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:05.392187 | orchestrator | 2025-05-28 20:14:05 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:05.392295 | orchestrator | 2025-05-28 20:14:05 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:08.446768 | orchestrator | 2025-05-28 20:14:08 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:08.446877 | orchestrator | 2025-05-28 20:14:08 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:11.494112 | orchestrator | 2025-05-28 20:14:11 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:11.494201 | orchestrator | 2025-05-28 20:14:11 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:14.571974 | orchestrator | 2025-05-28 20:14:14 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:14.572037 | orchestrator | 2025-05-28 20:14:14 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:17.625344 | orchestrator | 2025-05-28 20:14:17 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:17.625445 | orchestrator | 2025-05-28 20:14:17 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:20.680243 | orchestrator | 2025-05-28 20:14:20 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:20.680316 | orchestrator | 2025-05-28 20:14:20 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:23.731945 | orchestrator | 2025-05-28 20:14:23 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:23.732045 | orchestrator | 2025-05-28 20:14:23 | INFO  | Wait 1 second(s) until 
the next check 2025-05-28 20:14:26.781090 | orchestrator | 2025-05-28 20:14:26 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:26.781199 | orchestrator | 2025-05-28 20:14:26 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:29.832815 | orchestrator | 2025-05-28 20:14:29 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:29.832895 | orchestrator | 2025-05-28 20:14:29 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:32.882203 | orchestrator | 2025-05-28 20:14:32 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:32.882311 | orchestrator | 2025-05-28 20:14:32 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:35.934433 | orchestrator | 2025-05-28 20:14:35 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:35.934577 | orchestrator | 2025-05-28 20:14:35 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:38.980890 | orchestrator | 2025-05-28 20:14:38 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:38.981022 | orchestrator | 2025-05-28 20:14:38 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:42.038098 | orchestrator | 2025-05-28 20:14:42 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:42.038209 | orchestrator | 2025-05-28 20:14:42 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:45.089268 | orchestrator | 2025-05-28 20:14:45 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:45.089380 | orchestrator | 2025-05-28 20:14:45 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:48.136781 | orchestrator | 2025-05-28 20:14:48 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:48.136860 | orchestrator | 2025-05-28 20:14:48 | INFO  | Wait 1 second(s) until the next check 
2025-05-28 20:14:51.190091 | orchestrator | 2025-05-28 20:14:51 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:51.190184 | orchestrator | 2025-05-28 20:14:51 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:54.238561 | orchestrator | 2025-05-28 20:14:54 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:54.238648 | orchestrator | 2025-05-28 20:14:54 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:14:57.286359 | orchestrator | 2025-05-28 20:14:57 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:14:57.286467 | orchestrator | 2025-05-28 20:14:57 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:00.337297 | orchestrator | 2025-05-28 20:15:00 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:00.337401 | orchestrator | 2025-05-28 20:15:00 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:03.384442 | orchestrator | 2025-05-28 20:15:03 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:03.384548 | orchestrator | 2025-05-28 20:15:03 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:06.429941 | orchestrator | 2025-05-28 20:15:06 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:06.430197 | orchestrator | 2025-05-28 20:15:06 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:09.478355 | orchestrator | 2025-05-28 20:15:09 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:09.478447 | orchestrator | 2025-05-28 20:15:09 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:12.519096 | orchestrator | 2025-05-28 20:15:12 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:12.519185 | orchestrator | 2025-05-28 20:15:12 | INFO  | Wait 1 second(s) until the next check 2025-05-28 
20:15:15.564032 | orchestrator | 2025-05-28 20:15:15 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:15.564148 | orchestrator | 2025-05-28 20:15:15 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:18.617194 | orchestrator | 2025-05-28 20:15:18 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:18.617295 | orchestrator | 2025-05-28 20:15:18 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:21.676881 | orchestrator | 2025-05-28 20:15:21 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:21.676982 | orchestrator | 2025-05-28 20:15:21 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:24.729204 | orchestrator | 2025-05-28 20:15:24 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:24.729276 | orchestrator | 2025-05-28 20:15:24 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:27.782933 | orchestrator | 2025-05-28 20:15:27 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:27.783059 | orchestrator | 2025-05-28 20:15:27 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:30.839526 | orchestrator | 2025-05-28 20:15:30 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:30.839627 | orchestrator | 2025-05-28 20:15:30 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:33.891300 | orchestrator | 2025-05-28 20:15:33 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:33.891399 | orchestrator | 2025-05-28 20:15:33 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:36.952448 | orchestrator | 2025-05-28 20:15:36 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:36.952552 | orchestrator | 2025-05-28 20:15:36 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:40.012108 
| orchestrator | 2025-05-28 20:15:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:40.012225 | orchestrator | 2025-05-28 20:15:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:43.065255 | orchestrator | 2025-05-28 20:15:43 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:43.065353 | orchestrator | 2025-05-28 20:15:43 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:46.115572 | orchestrator | 2025-05-28 20:15:46 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:46.115691 | orchestrator | 2025-05-28 20:15:46 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:49.158478 | orchestrator | 2025-05-28 20:15:49 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:49.158566 | orchestrator | 2025-05-28 20:15:49 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:52.210263 | orchestrator | 2025-05-28 20:15:52 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:52.210367 | orchestrator | 2025-05-28 20:15:52 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:55.254716 | orchestrator | 2025-05-28 20:15:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:55.254871 | orchestrator | 2025-05-28 20:15:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:15:58.301305 | orchestrator | 2025-05-28 20:15:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:15:58.301412 | orchestrator | 2025-05-28 20:15:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:01.350181 | orchestrator | 2025-05-28 20:16:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:01.350279 | orchestrator | 2025-05-28 20:16:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:04.397168 | orchestrator 
| 2025-05-28 20:16:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:04.397304 | orchestrator | 2025-05-28 20:16:04 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:07.447951 | orchestrator | 2025-05-28 20:16:07 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:07.448054 | orchestrator | 2025-05-28 20:16:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:10.498140 | orchestrator | 2025-05-28 20:16:10 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:10.498245 | orchestrator | 2025-05-28 20:16:10 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:13.547154 | orchestrator | 2025-05-28 20:16:13 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:13.547256 | orchestrator | 2025-05-28 20:16:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:16.592795 | orchestrator | 2025-05-28 20:16:16 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:16.592910 | orchestrator | 2025-05-28 20:16:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:19.641722 | orchestrator | 2025-05-28 20:16:19 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:19.641894 | orchestrator | 2025-05-28 20:16:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:22.696896 | orchestrator | 2025-05-28 20:16:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:22.697002 | orchestrator | 2025-05-28 20:16:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:25.744525 | orchestrator | 2025-05-28 20:16:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:25.744629 | orchestrator | 2025-05-28 20:16:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:28.792925 | orchestrator | 2025-05-28 
20:16:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:28.793020 | orchestrator | 2025-05-28 20:16:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:31.841406 | orchestrator | 2025-05-28 20:16:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:31.841486 | orchestrator | 2025-05-28 20:16:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:34.886788 | orchestrator | 2025-05-28 20:16:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:34.886909 | orchestrator | 2025-05-28 20:16:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:37.929174 | orchestrator | 2025-05-28 20:16:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:37.929304 | orchestrator | 2025-05-28 20:16:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:40.982331 | orchestrator | 2025-05-28 20:16:40 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:40.982435 | orchestrator | 2025-05-28 20:16:40 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:44.035679 | orchestrator | 2025-05-28 20:16:44 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:44.035818 | orchestrator | 2025-05-28 20:16:44 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:47.075288 | orchestrator | 2025-05-28 20:16:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:47.075376 | orchestrator | 2025-05-28 20:16:47 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:50.117706 | orchestrator | 2025-05-28 20:16:50 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:50.117901 | orchestrator | 2025-05-28 20:16:50 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:16:53.168114 | orchestrator | 2025-05-28 20:16:53 | INFO 
 | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:16:53.168209 | orchestrator | 2025-05-28 20:16:53 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries elided: "Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED" followed by "Wait 1 second(s) until the next check", repeated roughly every 3 seconds from 20:16:56 through 20:21:43 ...]
2025-05-28 20:21:46.101331 | orchestrator | 2025-05-28 20:21:46 | INFO  | Task 4b8b9a58-10e9-482c-a6ef-abfa4a55b64e is in state STARTED
[... both tasks polled together, each reported STARTED roughly every 3 seconds from 20:21:46 through 20:21:55 ...]
2025-05-28 20:21:58.328689 | orchestrator | 2025-05-28 20:21:58 | INFO  | Task 4b8b9a58-10e9-482c-a6ef-abfa4a55b64e is in state SUCCESS
2025-05-28 20:21:58.330177 | orchestrator | 2025-05-28 20:21:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
[... identical polling entries elided: "Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED" followed by "Wait 1 second(s) until the next check", repeated roughly every 3 seconds from 20:22:01 through 20:24:52 ...]
2025-05-28 20:24:55.305835 | orchestrator | 2025-05-28 20:24:55 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:24:55.305925 | orchestrator | 2025-05-28 20:24:55 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:24:58.354897 | orchestrator | 2025-05-28 20:24:58 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:24:58.355007 | orchestrator | 2025-05-28 20:24:58 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:01.408103 | orchestrator | 2025-05-28 20:25:01 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:01.408213 | orchestrator | 2025-05-28 20:25:01 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:04.461901 | orchestrator | 2025-05-28 20:25:04 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:04.462008 | orchestrator | 2025-05-28 20:25:04 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:07.506986 | orchestrator | 2025-05-28 20:25:07 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:07.507096 | orchestrator | 2025-05-28 20:25:07 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:10.552489 | orchestrator | 2025-05-28 20:25:10 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:10.552596 | orchestrator | 2025-05-28 20:25:10 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:13.600225 | orchestrator | 2025-05-28 20:25:13 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:13.601022 | orchestrator | 2025-05-28 20:25:13 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:16.644862 | orchestrator | 2025-05-28 20:25:16 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:16.644950 | orchestrator | 2025-05-28 20:25:16 | INFO  | Wait 1 second(s) until the next check 2025-05-28 
20:25:19.691740 | orchestrator | 2025-05-28 20:25:19 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:19.691865 | orchestrator | 2025-05-28 20:25:19 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:22.741284 | orchestrator | 2025-05-28 20:25:22 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:22.741390 | orchestrator | 2025-05-28 20:25:22 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:25.792283 | orchestrator | 2025-05-28 20:25:25 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:25.792399 | orchestrator | 2025-05-28 20:25:25 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:28.843570 | orchestrator | 2025-05-28 20:25:28 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:28.843690 | orchestrator | 2025-05-28 20:25:28 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:31.894865 | orchestrator | 2025-05-28 20:25:31 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:31.894987 | orchestrator | 2025-05-28 20:25:31 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:34.947731 | orchestrator | 2025-05-28 20:25:34 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:34.947900 | orchestrator | 2025-05-28 20:25:34 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:37.995982 | orchestrator | 2025-05-28 20:25:37 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:37.996049 | orchestrator | 2025-05-28 20:25:37 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:41.050125 | orchestrator | 2025-05-28 20:25:41 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED 2025-05-28 20:25:41.050252 | orchestrator | 2025-05-28 20:25:41 | INFO  | Wait 1 second(s) until the next check 2025-05-28 20:25:44.085406 
| orchestrator | 2025-05-28 20:25:44 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 20:25:44.085516 | orchestrator | 2025-05-28 20:25:44 | INFO  | Wait 1 second(s) until the next check
2025-05-28 20:25:47.129556 | orchestrator | 2025-05-28 20:25:47 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 20:25:47.129656 | orchestrator | 2025-05-28 20:25:47 | INFO  | Wait 1 second(s) until the next check
2025-05-28 20:25:50.179837 | orchestrator | 2025-05-28 20:25:50 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 20:25:50.179952 | orchestrator | 2025-05-28 20:25:50 | INFO  | Wait 1 second(s) until the next check
2025-05-28 20:25:53.231520 | orchestrator | 2025-05-28 20:25:53 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 20:25:53.231606 | orchestrator | 2025-05-28 20:25:53 | INFO  | Wait 1 second(s) until the next check
2025-05-28 20:25:56.279306 | orchestrator | 2025-05-28 20:25:56 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 20:25:56.279406 | orchestrator | 2025-05-28 20:25:56 | INFO  | Wait 1 second(s) until the next check
2025-05-28 20:25:59.332117 | orchestrator | 2025-05-28 20:25:59 | INFO  | Task 32fe2e71-22f4-4425-9e47-97fef8c0d312 is in state STARTED
2025-05-28 20:25:59.332248 | orchestrator | 2025-05-28 20:25:59 | INFO  | Wait 1 second(s) until the next check
2025-05-28 20:26:01.617255 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-28 20:26:01.618372 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-28 20:26:02.421893 |
2025-05-28 20:26:02.422077 | PLAY [Post output play]
2025-05-28 20:26:02.439126 |
2025-05-28 20:26:02.439298 | LOOP [stage-output : Register sources]
2025-05-28 20:26:02.519258 |
2025-05-28 20:26:02.519626 | TASK [stage-output : Check sudo]
2025-05-28 20:26:03.393589 | orchestrator | sudo: a password is required
2025-05-28 20:26:03.557361 | orchestrator | ok: Runtime: 0:00:00.016160
2025-05-28 20:26:03.564538 |
2025-05-28 20:26:03.564656 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-28 20:26:03.596519 |
2025-05-28 20:26:03.596713 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-28 20:26:03.663352 | orchestrator | ok
2025-05-28 20:26:03.671576 |
2025-05-28 20:26:03.671714 | LOOP [stage-output : Ensure target folders exist]
2025-05-28 20:26:04.133732 | orchestrator | ok: "docs"
2025-05-28 20:26:04.133989 |
2025-05-28 20:26:04.390027 | orchestrator | ok: "artifacts"
2025-05-28 20:26:04.646620 | orchestrator | ok: "logs"
2025-05-28 20:26:04.673108 |
2025-05-28 20:26:04.673283 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-28 20:26:04.715061 |
2025-05-28 20:26:04.715362 | TASK [stage-output : Make all log files readable]
2025-05-28 20:26:04.997014 | orchestrator | ok
2025-05-28 20:26:05.012752 |
2025-05-28 20:26:05.012929 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-28 20:26:05.049045 | orchestrator | skipping: Conditional result was False
2025-05-28 20:26:05.058828 |
2025-05-28 20:26:05.058984 | TASK [stage-output : Discover log files for compression]
2025-05-28 20:26:05.084691 | orchestrator | skipping: Conditional result was False
2025-05-28 20:26:05.093766 |
2025-05-28 20:26:05.094000 | LOOP [stage-output : Archive everything from logs]
2025-05-28 20:26:05.131964 |
2025-05-28 20:26:05.132141 | PLAY [Post cleanup play]
2025-05-28 20:26:05.140917 |
2025-05-28 20:26:05.141052 | TASK [Set cloud fact (Zuul deployment)]
2025-05-28 20:26:05.198917 | orchestrator | ok
2025-05-28 20:26:05.213386 |
2025-05-28 20:26:05.213594 | TASK [Set cloud fact (local deployment)]
2025-05-28 20:26:05.259847 | orchestrator | skipping: Conditional result was False
2025-05-28 20:26:05.271522 |
2025-05-28 20:26:05.271661 | TASK [Clean the cloud environment]
2025-05-28 20:26:05.872569 | orchestrator | 2025-05-28 20:26:05 - clean up servers
2025-05-28 20:26:06.604258 | orchestrator | 2025-05-28 20:26:06 - testbed-manager
2025-05-28 20:26:06.688229 | orchestrator | 2025-05-28 20:26:06 - testbed-node-4
2025-05-28 20:26:07.026132 | orchestrator | 2025-05-28 20:26:07 - testbed-node-3
2025-05-28 20:26:07.114862 | orchestrator | 2025-05-28 20:26:07 - testbed-node-2
2025-05-28 20:26:07.204615 | orchestrator | 2025-05-28 20:26:07 - testbed-node-0
2025-05-28 20:26:07.299761 | orchestrator | 2025-05-28 20:26:07 - testbed-node-1
2025-05-28 20:26:07.390058 | orchestrator | 2025-05-28 20:26:07 - testbed-node-5
2025-05-28 20:26:07.485671 | orchestrator | 2025-05-28 20:26:07 - clean up keypairs
2025-05-28 20:26:07.510889 | orchestrator | 2025-05-28 20:26:07 - testbed
2025-05-28 20:26:07.544360 | orchestrator | 2025-05-28 20:26:07 - wait for servers to be gone
2025-05-28 20:26:16.470280 | orchestrator | 2025-05-28 20:26:16 - clean up ports
2025-05-28 20:26:16.652999 | orchestrator | 2025-05-28 20:26:16 - 0a697678-bc3f-4835-87b5-b26a5639ef7e
2025-05-28 20:26:16.896725 | orchestrator | 2025-05-28 20:26:16 - 1663aaf8-b01a-4a8b-929d-23d3d05f10c3
2025-05-28 20:26:17.361809 | orchestrator | 2025-05-28 20:26:17 - 2c35ac1b-9cbf-44e7-ba9f-8bea165ab058
2025-05-28 20:26:17.605935 | orchestrator | 2025-05-28 20:26:17 - 2e4c75ef-e582-46e8-b1fb-6b9dd1082c1c
2025-05-28 20:26:17.830714 | orchestrator | 2025-05-28 20:26:17 - 335baeb9-c21a-491a-b0f0-744cc4458d5d
2025-05-28 20:26:18.039396 | orchestrator | 2025-05-28 20:26:18 - 745a3022-8af2-4872-a9b3-a4f9c7941da3
2025-05-28 20:26:18.243240 | orchestrator | 2025-05-28 20:26:18 - fa02a324-d7da-49f4-9aba-f6f40c1cb2e3
2025-05-28 20:26:18.440720 | orchestrator | 2025-05-28 20:26:18 - clean up volumes
2025-05-28 20:26:18.567934 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-2-node-base
2025-05-28 20:26:18.605591 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-0-node-base
2025-05-28 20:26:18.645337 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-1-node-base
2025-05-28 20:26:18.684052 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-5-node-base
2025-05-28 20:26:18.723285 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-4-node-base
2025-05-28 20:26:18.764054 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-3-node-base
2025-05-28 20:26:18.806343 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-manager-base
2025-05-28 20:26:18.848687 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-1-node-4
2025-05-28 20:26:18.889307 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-5-node-5
2025-05-28 20:26:18.929733 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-0-node-3
2025-05-28 20:26:18.969196 | orchestrator | 2025-05-28 20:26:18 - testbed-volume-3-node-3
2025-05-28 20:26:19.011651 | orchestrator | 2025-05-28 20:26:19 - testbed-volume-4-node-4
2025-05-28 20:26:19.051360 | orchestrator | 2025-05-28 20:26:19 - testbed-volume-2-node-5
2025-05-28 20:26:19.093309 | orchestrator | 2025-05-28 20:26:19 - testbed-volume-8-node-5
2025-05-28 20:26:19.136866 | orchestrator | 2025-05-28 20:26:19 - testbed-volume-6-node-3
2025-05-28 20:26:19.178504 | orchestrator | 2025-05-28 20:26:19 - testbed-volume-7-node-4
2025-05-28 20:26:19.219167 | orchestrator | 2025-05-28 20:26:19 - disconnect routers
2025-05-28 20:26:19.326125 | orchestrator | 2025-05-28 20:26:19 - testbed
2025-05-28 20:26:20.309738 | orchestrator | 2025-05-28 20:26:20 - clean up subnets
2025-05-28 20:26:20.362878 | orchestrator | 2025-05-28 20:26:20 - subnet-testbed-management
2025-05-28 20:26:20.556234 | orchestrator | 2025-05-28 20:26:20 - clean up networks
2025-05-28 20:26:20.684366 | orchestrator | 2025-05-28 20:26:20 - net-testbed-management
2025-05-28 20:26:20.998260 | orchestrator | 2025-05-28 20:26:20 - clean up security groups
2025-05-28 20:26:21.037525 | orchestrator | 2025-05-28 20:26:21 - testbed-management
2025-05-28 20:26:21.147199 | orchestrator | 2025-05-28 20:26:21 - testbed-node
2025-05-28 20:26:21.256479 | orchestrator | 2025-05-28 20:26:21 - clean up floating ips
2025-05-28 20:26:21.334128 | orchestrator | 2025-05-28 20:26:21 - 81.163.193.158
2025-05-28 20:26:21.680162 | orchestrator | 2025-05-28 20:26:21 - clean up routers
2025-05-28 20:26:21.742615 | orchestrator | 2025-05-28 20:26:21 - testbed
2025-05-28 20:26:23.322308 | orchestrator | ok: Runtime: 0:00:17.319996
2025-05-28 20:26:23.328882 |
2025-05-28 20:26:23.329251 | PLAY RECAP
2025-05-28 20:26:23.329475 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-28 20:26:23.329684 |
2025-05-28 20:26:23.572595 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-28 20:26:23.574069 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-28 20:26:24.367732 |
2025-05-28 20:26:24.367897 | PLAY [Cleanup play]
2025-05-28 20:26:24.384163 |
2025-05-28 20:26:24.384306 | TASK [Set cloud fact (Zuul deployment)]
2025-05-28 20:26:24.435094 | orchestrator | ok
2025-05-28 20:26:24.442085 |
2025-05-28 20:26:24.442217 | TASK [Set cloud fact (local deployment)]
2025-05-28 20:26:24.476419 | orchestrator | skipping: Conditional result was False
2025-05-28 20:26:24.497136 |
2025-05-28 20:26:24.497506 | TASK [Clean the cloud environment]
2025-05-28 20:26:25.776547 | orchestrator | 2025-05-28 20:26:25 - clean up servers
2025-05-28 20:26:26.255644 | orchestrator | 2025-05-28 20:26:26 - clean up keypairs
2025-05-28 20:26:26.271747 | orchestrator | 2025-05-28 20:26:26 - wait for servers to be gone
2025-05-28 20:26:26.312081 | orchestrator | 2025-05-28 20:26:26 - clean up ports
2025-05-28 20:26:26.390618 | orchestrator | 2025-05-28 20:26:26 - clean up volumes
2025-05-28 20:26:26.451297 | orchestrator | 2025-05-28 20:26:26 - disconnect routers
2025-05-28 20:26:26.473571 | orchestrator | 2025-05-28 20:26:26 - clean up subnets
2025-05-28 20:26:26.494852 | orchestrator | 2025-05-28 20:26:26 - clean up networks
2025-05-28 20:26:26.666133 | orchestrator | 2025-05-28 20:26:26 - clean up security groups
2025-05-28 20:26:26.719576 | orchestrator | 2025-05-28 20:26:26 - clean up floating ips
2025-05-28 20:26:26.743701 | orchestrator | 2025-05-28 20:26:26 - clean up routers
2025-05-28 20:26:27.065143 | orchestrator | ok: Runtime: 0:00:01.378317
2025-05-28 20:26:27.067164 |
2025-05-28 20:26:27.067258 | PLAY RECAP
2025-05-28 20:26:27.067314 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-28 20:26:27.067338 |
2025-05-28 20:26:27.196174 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-28 20:26:27.198742 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-28 20:26:28.002308 |
2025-05-28 20:26:28.002480 | PLAY [Base post-fetch]
2025-05-28 20:26:28.019845 |
2025-05-28 20:26:28.020001 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-28 20:26:28.088420 | orchestrator | skipping: Conditional result was False
2025-05-28 20:26:28.103401 |
2025-05-28 20:26:28.103714 | TASK [fetch-output : Set log path for single node]
2025-05-28 20:26:28.145271 | orchestrator | ok
2025-05-28 20:26:28.151739 |
2025-05-28 20:26:28.151861 | LOOP [fetch-output : Ensure local output dirs]
2025-05-28 20:26:28.646642 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/aad55cb4c7db435696486539db6e3f7a/work/logs"
2025-05-28 20:26:28.941455 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/aad55cb4c7db435696486539db6e3f7a/work/artifacts"
2025-05-28 20:26:29.217275 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/aad55cb4c7db435696486539db6e3f7a/work/docs"
2025-05-28 20:26:29.234119 |
2025-05-28 20:26:29.234286 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-28 20:26:30.166146 | orchestrator | changed: .d..t...... ./
2025-05-28 20:26:30.166418 | orchestrator | changed: All items complete
2025-05-28 20:26:30.166455 |
2025-05-28 20:26:30.981666 | orchestrator | changed: .d..t...... ./
2025-05-28 20:26:31.717635 | orchestrator | changed: .d..t...... ./
2025-05-28 20:26:31.757596 |
2025-05-28 20:26:31.757772 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-28 20:26:31.808118 | orchestrator | skipping: Conditional result was False
2025-05-28 20:26:31.814765 | orchestrator | skipping: Conditional result was False
2025-05-28 20:26:31.841746 |
2025-05-28 20:26:31.841902 | PLAY RECAP
2025-05-28 20:26:31.841989 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-28 20:26:31.842033 |
2025-05-28 20:26:31.986704 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-28 20:26:31.987967 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-28 20:26:32.755033 |
2025-05-28 20:26:32.755204 | PLAY [Base post]
2025-05-28 20:26:32.770869 |
2025-05-28 20:26:32.771014 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-28 20:26:33.721759 | orchestrator | changed
2025-05-28 20:26:33.732417 |
2025-05-28 20:26:33.732566 | PLAY RECAP
2025-05-28 20:26:33.732641 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-28 20:26:33.732715 |
2025-05-28 20:26:33.853414 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-28 20:26:33.855894 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-28 20:26:34.671278 |
2025-05-28 20:26:34.671457 | PLAY [Base post-logs]
2025-05-28 20:26:34.682600 |
2025-05-28 20:26:34.682743 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-28 20:26:35.176019 | localhost | changed
2025-05-28 20:26:35.188003 |
2025-05-28 20:26:35.188180 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-28 20:26:35.224154 | localhost | ok
2025-05-28 20:26:35.227348 |
2025-05-28 20:26:35.227452 | TASK [Set zuul-log-path fact]
2025-05-28 20:26:35.242436 | localhost | ok
2025-05-28 20:26:35.250747 |
2025-05-28 20:26:35.250881 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-28 20:26:35.275569 | localhost | ok
2025-05-28 20:26:35.278555 |
2025-05-28 20:26:35.278658 | TASK [upload-logs : Create log directories]
2025-05-28 20:26:35.829978 | localhost | changed
2025-05-28 20:26:35.834975 |
2025-05-28 20:26:35.835139 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-28 20:26:36.368421 | localhost -> localhost | ok: Runtime: 0:00:00.007813
2025-05-28 20:26:36.374739 |
2025-05-28 20:26:36.374939 | TASK [upload-logs : Upload logs to log server]
2025-05-28 20:26:36.998640 | localhost | Output suppressed because no_log was given
2025-05-28 20:26:37.003629 |
2025-05-28 20:26:37.003971 | LOOP [upload-logs : Compress console log and json output]
2025-05-28 20:26:37.069021 | localhost | skipping: Conditional result was False
2025-05-28 20:26:37.075185 | localhost | skipping: Conditional result was False
2025-05-28 20:26:37.096746 |
2025-05-28 20:26:37.096993 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-28 20:26:37.185541 | localhost | skipping: Conditional result was False
2025-05-28 20:26:37.186111 |
2025-05-28 20:26:37.198953 | localhost | skipping: Conditional result was False
2025-05-28 20:26:37.206248 |
2025-05-28 20:26:37.206510 | LOOP [upload-logs : Upload console log and json output]